Spring 2023 CS4641/CS7641 A Homework 3¶

Instructor: Dr. Mahdi Roozbahani¶

Deadline: Friday, April 7th, 11:59 pm EST¶

  • No unapproved extension of the deadline is allowed. Submission past our 48-hour penalized acceptance period will lead to 0 credit.

  • Discussion is encouraged on Ed as part of the Q/A. However, all assignments should be done individually.

  • Plagiarism is a serious offense. You are responsible for completing your own work. You are not allowed to copy and paste, or paraphrase, or submit materials created or published by others, as if you created the materials. All materials submitted must be your own.
  • All incidents of suspected dishonesty, plagiarism, or violations of the Georgia Tech Honor Code will be subject to the institute’s Academic Integrity procedures. If we observe any (even small) similarities/plagiarism detected by Gradescope or our TAs, WE WILL DIRECTLY REPORT ALL CASES TO OSI, which may, unfortunately, lead to a very harsh outcome. Consequences can be severe, e.g., academic probation or dismissal, grade penalties, a 0 grade for assignments concerned, and prohibition from withdrawing from the class.

Instructions for the assignment¶

  • This assignment consists of both programming and theory questions.

  • Unless a theory question explicitly states that no work is required to be shown, you must provide an explanation, justification, or calculation for your answer.

  • To switch a cell between code and markdown, use the menu -> Cell -> Cell Type

  • You can directly type Latex equations into markdown cells.

  • If a question requires a picture, you can use the syntax <img src="" style="width: 300px;"/> to include it within your IPython notebook.

  • Your write-up must be submitted in PDF form. You may use either Latex, markdown, or any word processing software. We will **NOT** accept handwritten work. Make sure that your work is formatted correctly; for example, submit $\sum_{i=0} x_i$ instead of \text{sum_{i=0} x_i}

  • When submitting the non-programming part of your assignment, you must correctly map pages of your PDF to each question/subquestion to reflect where they appear. **Improperly mapped questions may not be graded correctly and/or will result in point deductions for the error.**
  • All assignments should be done individually, and each student must write up and submit their own answers.
  • Graduate Students: You are required to complete any sections marked as Bonus for Undergrads

Using the autograder¶

  • Grads will find three assignments on Gradescope that correspond to HW3: "Assignment 3 Programming", "Assignment 3 - Non-programming" and "Assignment 3 Programming - Bonus for all". Undergrads will find an additional assignment called "Assignment 3 Programming - Bonus for Undergrads".
  • You will submit your code for the autograder in the Assignment 3 Programming sections. Please refer to the Deliverables and Point Distribution section for what parts are considered required, bonus for undergrads, and bonus for all.

  • We have provided several .py files with library imports already added. Please DO NOT remove those import lines, and add your code after them. Note that these are the only libraries you are allowed to use for the homework.

  • You may make as many submissions as you like until the deadline. Additionally, note that the autograder tests each function separately, so it can serve as a useful debugging tool if you are unsure which part of your implementation has an issue.

  • For the "Assignment 3 - Non-programming" part, you will need to submit to Gradescope a PDF copy of your Jupyter Notebook with the cells run. See this EdStem Post for multiple ways to convert your .ipynb into a .pdf file. Please refer to the Deliverables and Point Distribution section for an outline of the non-programming questions.

  • When submitting to Gradescope, please make sure to mark the page(s) corresponding to each problem/sub-problem. The pages in the PDF should be of size 8.5" x 11", otherwise there may be a deduction in points for extra long sheets.

Using the local tests ¶

  • For some of the programming questions we have included a local test using a small toy dataset to aid in debugging. The local test sample data and outputs are stored in .py files in the local_tests_folder. The actual local tests are stored in localtests.py.
  • There are no points associated with passing or failing the local tests, you must still pass the autograder to get points.
  • It is possible to fail the local test and pass the autograder since the autograder has a certain allowed error tolerance while the local test allowed error may be smaller. Likewise, passing the local tests does not guarantee passing the autograder.
  • You do not need to pass both local and autograder tests to get points, passing the Gradescope autograder is sufficient for credit.
  • It might be helpful to comment out the tests for functions that have not been completed yet.
  • It is recommended to test each function as it is completed instead of completing the whole class and then testing. This may help in isolating errors. Do not rely solely on the local tests; continue to test on the autograder regularly as well.

Deliverables and Points Distribution¶

Q1: Image Compression [30pts]¶

Deliverables: imgcompression.py and printed results¶

  • 1.1 Image Compression [20 pts] - programming

    • svd [4pts]

    • compress [4pts]

    • rebuild_svd [4pts]

    • compression_ratio [4pts]

    • recovered_variance_proportion [4pts]

  • 1.2 Black and White [5 pts] non-programming

  • 1.3 Color Image [5 pts] non-programming

Q2: Understanding PCA [20pts]¶

Deliverables: pca.py and written portion¶

  • 2.1 PCA Implementation [10 pts] - programming

    • fit [5pts]

    • transform [2pts]

    • transform_rv [3pts]

  • 2.2 Visualize [5 pts] programming and non-programming

  • 2.3 PCA Reduced Facemask Dataset Analysis [5 pts] non-programming

  • 2.4 PCA Exploration [0 pts]

Q3: Regression and Regularization [80pts: 50pts + 20pts Bonus for Undergrads + 10pts Bonus for All]¶

Deliverables: regression.py and Written portion¶

  • 3.1 Regression and Regularization Implementations [50pts: 30pts + 20pts Bonus for Undergrad] - programming

    • RMSE [5pts]

    • Construct Poly Features 1D [2pts]

    • Construct Poly Features 2D [3pts]

    • Prediction [5pts]

    • Linear Fit Closed Form [5pts]

    • Ridge Fit Closed Form [5pts]

    • Cross Validation [5pts]

    • Linear Gradient Descent [5pts] Bonus for Undergrad

    • Linear Stochastic Gradient Descent [5pts] Bonus for Undergrad

    • Ridge Gradient Descent [5pts] Bonus for Undergrad

    • Ridge Stochastic Gradient Descent [5pts] Bonus for Undergrad

  • 3.2 About RMSE [3 pts] non-programming

  • 3.3 Testing: General Functions and Linear Regression [5 pts] non-programming

  • 3.4 Testing: Ridge Regression [5 pts] non-programming

  • 3.5 Cross Validation [7 pts] non-programming

  • 3.6 Noisy Input Samples in Linear Regression [10 pts] non-programming BONUS FOR ALL

Q4: Naive Bayes and Logistic Regression [35pts]¶

Deliverables: logistic_regression.py and Written portion¶

  • 4.1 Llama Breed Problem using Naive Bayes [5 pts] non-programming

  • 4.2 News Data Sentiment Classification Using Logistic Regression [30 pts] - programming

    • sigmoid [2 pts]

    • bias_augment [3 pts]

    • predict_probs [5 pts]

    • predict_labels [2 pts]

    • loss [3 pts]

    • gradient [3 pts]

    • accuracy [2 pts]

    • evaluate [5 pts]

    • fit [5 pts]

Q5: Noise in PCA and Linear Regression [15pts]¶

Deliverables: Written portion¶

  • 5.1 Slope Functions [5 pts] non-programming

  • 5.2 Error in Y and Error in X and Y [5 pts] non-programming

  • 5.3 Analysis [5 pts] non-programming

Q6: Feature Reduction [25pts Bonus for All]¶

Deliverables: feature_reduction.py and Written portion¶

  • 6.1 Feature Reduction [18 pts] - programming

    • forward_selection [9pts]

    • backward_elimination [9pts]

  • 6.2 Feature Selection - Discussion [7 pts] non-programming

Q7: Movie Recommendation with SVD [10pts Bonus for All]¶

Deliverables: svd_recommender.py and Written portion¶

  • 7.1 SVD Recommender

    • recommender_svd [5pts]

    • predict [5pts]

  • 7.2 Visualize Movie Vectors [0pts]

0 Set up¶

This notebook was tested under Python 3.10, and the corresponding packages can be downloaded from miniconda. You may also want to familiarize yourself with several packages:

  • jupyter notebook
  • numpy
  • matplotlib
  • sklearn

There is also a VS Code and Anaconda Setup Tutorial on Ed under the "Links" category

Please implement the functions that have raise NotImplementedError, and after you finish coding, delete or comment out the raise NotImplementedError line.

Library imports¶

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
# This is cell which sets up some of the modules you might need 
# Please do not change the cell or import any additional packages. 

import numpy as np
import pandas as pd
import matplotlib
from matplotlib import pyplot as plt
from sklearn.feature_extraction import text
from sklearn.datasets import load_diabetes, load_breast_cancer, load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import warnings
import sys

print('Version information')

print('python: {}'.format(sys.version))
print('matplotlib: {}'.format(matplotlib.__version__))
print('numpy: {}'.format(np.__version__))

warnings.filterwarnings('ignore')

%matplotlib inline
%load_ext autoreload
%autoreload 2

STUDENT_VERSION = 1
EO_TEXT, EO_FONT, EO_COLOR = 'TA VERSION', 'Arial Black', 'gray', 
EO_ALPHA, EO_SIZE, EO_ROT = 0.7, 90, 40
Version information
python: 3.10.9 | packaged by Anaconda, Inc. | (main, Mar  8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
matplotlib: 3.7.1
numpy: 1.23.5

Q1: Image Compression [30 pts] **[P]** | **[W]**¶

Load images data and plot¶

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
# load Image
image = plt.imread("./data/hw3_image_compression.jpeg")/255
# plot image
fig = plt.figure(figsize=(10,10))
if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE, color=EO_COLOR, alpha=EO_ALPHA, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
plt.imshow(image)
Out[ ]:
<matplotlib.image.AxesImage at 0x2649995b400>
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
def rgb2gray(rgb):   
    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])

fig = plt.figure(figsize=(10, 10))

if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE, color=EO_COLOR, alpha=EO_ALPHA, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
# plot several images
plt.imshow(rgb2gray(image), cmap=plt.cm.bone)
Out[ ]:
<matplotlib.image.AxesImage at 0x26494548ee0>

1.1 Image compression [20pts] **[P]**¶

SVD is a dimensionality reduction technique that allows us to compress images by throwing away the least important information.

Larger singular values capture greater variance and thus more of the information in the corresponding singular vectors. To perform image compression, apply SVD on each matrix and discard the small singular values. The loss of information through this process is negligible, and the difference between the images can hardly be spotted.

For example, the proportion of variance captured by the first component is $$\frac{\sigma_1^2}{\sum_{i=1}^n \sigma_i^2}$$ where $\sigma_i$ is the $i^{th}$ singular value.

In the imgcompression.py file, complete the following functions:

  • svd: You may use np.linalg.svd in this function; full_matrices defaults to True, but you may also set full_matrices=True explicitly using the optional parameter. Hint 2 may be useful.
  • compress
  • rebuild_svd
  • compression_ratio: Hint 1 may be useful
  • recovered_variance_proportion: Hint 1 may be useful

HINT 1: http://timbaumann.info/svd-image-compression-demo/ is a useful article on image compression and compression ratio. You may find this article useful for implementing the functions compression_ratio and recovered_variance_proportion

HINT 2: If you have never used np.linalg.svd, it might be helpful to read Numpy's SVD documentation and note the particularities of the $V$ matrix and that it is returned already transposed.

HINT 3: The shape of $S$ resulting from SVD may change depending on whether N > D, N < D, or N = D. Therefore, when checking the shape of $S$, note that min(N, D) means whichever value is lower between N and D.
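To make the hints concrete, here is a minimal sketch of the black-and-white (2-D) case only. The helper names mirror those in imgcompression.py, but this is a hypothetical illustration under the conventions above, not the graded implementation; your version must also handle the color case, and you should check the Hint 1 article for the exact compression-ratio convention expected.

```python
import numpy as np

def svd_bw(X):
    # full_matrices=True keeps U as N x N and V^T as D x D (see Hint 2)
    return np.linalg.svd(X, full_matrices=True)

def compress_bw(U, S, Vt, k):
    # keep only the first k components of each factor
    return U[:, :k], S[:k], Vt[:k, :]

def rebuild_svd_bw(Uk, Sk, Vtk):
    # reassemble the rank-k approximation U_k diag(S_k) V_k^T
    return Uk @ np.diag(Sk) @ Vtk

def compression_ratio_bw(X, k):
    # storing k components costs k*(N + D + 1) numbers vs N*D originally
    N, D = X.shape
    return k * (N + D + 1) / (N * D)

def recovered_variance_proportion_bw(S, k):
    # fraction of total variance captured by the first k singular values
    return np.sum(S[:k] ** 2) / np.sum(S ** 2)
```

With k equal to min(N, D) the rebuild reproduces the original matrix exactly (up to floating-point error) and the recovered variance proportion is 1.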

1.1.1 Local Tests for Image Compression Black and White Case [No Points]¶

You may test your implementation of the functions contained in imgcompression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestImgCompression

unittest_ic = TestImgCompression()
unittest_ic.test_svd_bw()
unittest_ic.test_compress_bw()
unittest_ic.test_rebuild_svd_bw()
unittest_ic.test_compression_ratio_bw()
unittest_ic.test_recovered_variance_proportion_bw()
UnitTest passed successfully for "SVD calculation - black and white images"!
UnitTest passed successfully for "Image compression - black and white images"!
UnitTest passed successfully for "SVD reconstruction - black and white images"!
UnitTest passed successfully for "Compression ratio - black and white images"!
UnitTest passed successfully for "Recovered variance proportion - black and white images"!

1.1.2 Local Tests for Image Compression Color Case [No Points]¶

You may test your implementation of the functions contained in imgcompression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestImgCompression

unittest_ic = TestImgCompression()

unittest_ic.test_svd_color()
unittest_ic.test_compress_color()
unittest_ic.test_rebuild_svd_color()
unittest_ic.test_compression_ratio_color()
unittest_ic.test_recovered_variance_proportion_color()
UnitTest passed successfully for "SVD calculation - color images"!
UnitTest passed successfully for "Image compression - color images"!
UnitTest passed successfully for "SVD reconstruction - color images"!
UnitTest passed successfully for "Compression ratio - color images"!
UnitTest passed successfully for "Recovered variance proportion - color images"!

1.2.1 Black and white [5 pts] **[W]**¶

This question will use your implementation of the functions from Q1.1 to generate a set of images compressed to different degrees. You can simply run the below cell without making any changes to it, assuming you have implemented the functions in Q1.1.

Make sure these images are displayed when submitting the PDF version of the Jupyter notebook as part of the non-programming submission of this assignment.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
from imgcompression import ImgCompression

imcompression = ImgCompression()
bw_image = rgb2gray(image)
U, S, V = imcompression.svd(bw_image)
component_num = [1,2,5,10,20,40,80,160,256]

fig = plt.figure(figsize=(18, 18))

# plot several images
i=0
for k in component_num:
    U_compressed, S_compressed, V_compressed = imcompression.compress(U, S, V, k)
    img_rebuild = imcompression.rebuild_svd(U_compressed, S_compressed, V_compressed)
    c = np.around(imcompression.compression_ratio(bw_image, k), 4)
    r = np.around(imcompression.recovered_variance_proportion(S, k), 3)
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    ax.imshow(img_rebuild, cmap=plt.cm.bone)
    ax.set_title(f"{k} Components")
    if not STUDENT_VERSION:
        ax.text(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
            fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA, fontname=EO_FONT,
            ha='center', va='center', rotation=EO_ROT)
    ax.set_xlabel(f"Compression: {c},\nRecovered Variance: {r}")
    i = i+1

1.2.2 Black and White Compression Savings [No Points]¶

This question will use your implementation of the functions from Q1.1 to compare the number of bytes required to represent the SVD decomposition for the original image to the compressed image using different degrees of compression. You can simply run the below cell without making any changes to it, assuming you have implemented the functions in Q1.1.

Running this cell is primarily for your own understanding of the compression process.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
from imgcompression import ImgCompression

imcompression = ImgCompression()
bw_image = rgb2gray(image)
U, S, V = imcompression.svd(bw_image)

component_num = [1,2,5,10,20,40,80,160,256]

# Compare memory savings for BW image
for k in component_num:
    og_bytes, comp_bytes, savings = imcompression.memory_savings(bw_image, U, S, V, k)
    comp_ratio = (og_bytes/comp_bytes)
    og_bytes = imcompression.nbytes_to_string(og_bytes)
    comp_bytes = imcompression.nbytes_to_string(comp_bytes)
    savings = imcompression.nbytes_to_string(savings)
    print(f"{k} components: Original Image: {og_bytes} -> Compressed Image: {comp_bytes}, Savings: {savings}, Compression Ratio {comp_ratio:.1f}:1")
1 components: Original Image: 12.207 MB -> Compressed Image: 20.32 KB, Savings: 12.187 MB, Compression Ratio 615.1:1
2 components: Original Image: 12.207 MB -> Compressed Image: 40.641 KB, Savings: 12.167 MB, Compression Ratio 307.6:1
5 components: Original Image: 12.207 MB -> Compressed Image: 101.602 KB, Savings: 12.108 MB, Compression Ratio 123.0:1
10 components: Original Image: 12.207 MB -> Compressed Image: 203.203 KB, Savings: 12.009 MB, Compression Ratio 61.5:1
20 components: Original Image: 12.207 MB -> Compressed Image: 406.406 KB, Savings: 11.81 MB, Compression Ratio 30.8:1
40 components: Original Image: 12.207 MB -> Compressed Image: 812.812 KB, Savings: 11.413 MB, Compression Ratio 15.4:1
80 components: Original Image: 12.207 MB -> Compressed Image: 1.588 MB, Savings: 10.62 MB, Compression Ratio 7.7:1
160 components: Original Image: 12.207 MB -> Compressed Image: 3.175 MB, Savings: 9.032 MB, Compression Ratio 3.8:1
256 components: Original Image: 12.207 MB -> Compressed Image: 5.08 MB, Savings: 7.127 MB, Compression Ratio 2.4:1
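The printed savings follow directly from the factor sizes: a rank-k SVD stores $U_k$ (N x k), $S_k$ (k values), and $V_k$ (k x D). As a back-of-the-envelope check, assume a hypothetical 1600 x 1000 float64 grayscale image, a shape consistent with the 12.207 MB original shown above:

```python
N, D = 1600, 1000                           # assumed (hypothetical) image shape
orig_bytes = N * D * 8                      # float64 pixels: ~12.207 MB
for k in [1, 2, 5]:
    # bytes for U_k (N x k), S_k (k,), and V_k (k x D) at 8 bytes each
    comp_bytes = (N * k + k + k * D) * 8
    print(f"{k} components: {orig_bytes / 2**20:.3f} MB -> "
          f"{comp_bytes / 2**10:.2f} KB, ratio {orig_bytes / comp_bytes:.1f}:1")
```

For k = 1 this gives (1600 + 1 + 1000) * 8 = 20808 bytes, about 20.32 KB, and a ratio of roughly 615:1, matching the first line of the output above.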

1.3.1 Color image [5 pts] **[W]**¶

This section will use your implementation of the functions from Q1.1 to generate a set of images compressed to different degrees. You can simply run the below cell without making any changes to it, assuming you have implemented the functions in Q1.1.

Make sure these images are displayed when submitting the PDF version of the Jupyter notebook as part of the non-programming submission of this assignment.

NOTE: You might get the warning "Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers)." This warning is acceptable since some of the pixels may go above 1.0 while rebuilding. You should see images similar to the original even with such clipping.

HINT 1: Make sure your implementation of recovered_variance_proportion returns an array of 3 floats for a color image.
HINT 2: Try performing SVD on the individual color channels and then stack the individual channel $U$, $S$, $V$ matrices.
HINT 3: You may need separate implementations for color and grayscale images in the same function.
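One possible shape for Hint 2, assuming the channels-first (3, N, D) layout produced by np.moveaxis in the cell below. These are hypothetical helpers sketching the idea, not the graded implementation (which handles both cases in one function, per Hint 3):

```python
import numpy as np

def svd_color_sketch(X):
    # SVD each channel of a channels-first (3, N, D) image, then stack
    # the per-channel U, S, V^T factors along a new leading axis
    Us, Ss, Vts = zip(*(np.linalg.svd(ch, full_matrices=True) for ch in X))
    return np.stack(Us), np.stack(Ss), np.stack(Vts)

def rebuild_color_sketch(U, S, Vt, k):
    # rebuild each channel from its first k components, then restack
    return np.stack([U[c][:, :k] @ np.diag(S[c][:k]) @ Vt[c][:k, :]
                     for c in range(U.shape[0])])
```

Keeping all min(N, D) components per channel reconstructs the original image up to floating-point error.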

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
from imgcompression import ImgCompression

imcompression = ImgCompression()
image_rolled = np.moveaxis(image, -1,0)
U, S, V = imcompression.svd(image_rolled)

component_num = [1,2,5,10,20,40,80,160,256]

fig = plt.figure(figsize=(18, 18))

# plot several images
i=0
for k in component_num:
    U_compressed, S_compressed, V_compressed = imcompression.compress(U, S, V, k)
    img_rebuild = np.clip(imcompression.rebuild_svd(U_compressed, S_compressed, V_compressed),0,1)
    img_rebuild = np.moveaxis(img_rebuild, 0,-1)
    c = np.around(imcompression.compression_ratio(image_rolled, k), 4)
    r = np.around(imcompression.recovered_variance_proportion(S, k), 3)
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    ax.imshow(img_rebuild)
    ax.set_title(f"{k} Components")
    if not STUDENT_VERSION:
        ax.text(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
            fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA, fontname=EO_FONT,
            ha='center', va='center', rotation=EO_ROT)
    ax.set_xlabel(f"Compression: {np.around(c,4)},\nRecovered Variance:  R: {r[0]}  G: {r[1]}  B: {r[2]}")
    i = i+1

1.3.2 Color Compression Savings [No Points]¶

This question will use your implementation of the functions from Q1.1 to compare the number of bytes required to represent the SVD decomposition for the original image to the compressed image using different degrees of compression. You can simply run the below cell without making any changes to it, assuming you have implemented the functions in Q1.1.

Running this cell is primarily for your own understanding of the compression process.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
from imgcompression import ImgCompression

imcompression = ImgCompression()
U, S, V = imcompression.svd(image_rolled)

component_num = [1,2,5,10,20,40,80,160,256]

# Compare the memory savings of the color image
i=0
for k in component_num:
    og_bytes, comp_bytes, savings = imcompression.memory_savings(image_rolled, U, S, V, k)
    comp_ratio = (og_bytes/comp_bytes)
    og_bytes = imcompression.nbytes_to_string(og_bytes)
    comp_bytes = imcompression.nbytes_to_string(comp_bytes)
    savings = imcompression.nbytes_to_string(savings)
    print(f"{k} components: Original Image: {og_bytes} -> Compressed Image: {comp_bytes}, Savings: {savings}, Compression Ratio {comp_ratio:.1f}:1")
1 components: Original Image: 36.621 MB -> Compressed Image: 60.961 KB, Savings: 36.562 MB, Compression Ratio 615.1:1
2 components: Original Image: 36.621 MB -> Compressed Image: 121.922 KB, Savings: 36.502 MB, Compression Ratio 307.6:1
5 components: Original Image: 36.621 MB -> Compressed Image: 304.805 KB, Savings: 36.323 MB, Compression Ratio 123.0:1
10 components: Original Image: 36.621 MB -> Compressed Image: 609.609 KB, Savings: 36.026 MB, Compression Ratio 61.5:1
20 components: Original Image: 36.621 MB -> Compressed Image: 1.191 MB, Savings: 35.43 MB, Compression Ratio 30.8:1
40 components: Original Image: 36.621 MB -> Compressed Image: 2.381 MB, Savings: 34.24 MB, Compression Ratio 15.4:1
80 components: Original Image: 36.621 MB -> Compressed Image: 4.763 MB, Savings: 31.859 MB, Compression Ratio 7.7:1
160 components: Original Image: 36.621 MB -> Compressed Image: 9.525 MB, Savings: 27.096 MB, Compression Ratio 3.8:1
256 components: Original Image: 36.621 MB -> Compressed Image: 15.24 MB, Savings: 21.381 MB, Compression Ratio 2.4:1

Q2: Understanding PCA [20 pts] **[P]** | **[W]**¶

Principal Component Analysis (PCA) is another dimensionality reduction technique; it reduces dimensions by discarding the components with small eigenvalues (low variance) and their eigenvectors. With PCA, we center the data first by subtracting the mean of each feature. Each singular value tells us how much of the variance of a matrix (e.g. an image) is captured in each component. In this problem, we will investigate how PCA can be used to improve features for regression and classification tasks and how the data itself affects the behavior of PCA.

Here we will implement PCA using Singular Value Decomposition (SVD). Recall from class that in PCA, we project the original matrix $X$ onto new components, each one corresponding to an eigenvector of the covariance matrix $X^T X$. SVD decomposes $X$ into three matrices $U$, $S$ and $V^T$. This also gives us a decomposition of the covariance matrix as: $$X^TX = (USV^T)^TUSV^T = (VS^TU^T)USV^T = V S^2 V^T $$

This means two important things for us:

  • The rows of $V^T$, often referred to as the right singular vectors of $X$, are the eigenvectors of $X^TX$.
  • The squared singular values $S^2$ are the eigenvalues of $X^TX$.

So the first $n$ principal components are obtained by projecting $X$ onto the first $n$ rows of $V^T$. Similarly, $S^2$ gives a measure of the variance retained.
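The identity above is easy to verify numerically. A quick sanity check (not part of the graded code) that $X^TX = VS^2V^T$ for a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# X^T X should equal V S^2 V^T, so the rows of V^T are its eigenvectors
# and the squared singular values are its eigenvalues
assert np.allclose(X.T @ X, Vt.T @ np.diag(S ** 2) @ Vt)
```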

2.1 Implementation [10 pts] **[P]**¶

Implement PCA. In the pca.py file, complete the following functions:

  • fit: You may use np.linalg.svd. Set full_matrices=False. Hint 1 may be useful.
  • transform
  • transform_rv: You may find np.cumsum helpful for this function.

Assume a dataset is composed of N datapoints, each of which has D features with D < N. The dimension of our data would be D. However, it is possible that many of these dimensions contain redundant information. Each feature explains part of the variance in our dataset, and some features may explain more variance than others.

HINT 1: Make sure you remember to first center your data by subtracting the mean of each feature.
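Putting the hints together, a minimal sketch of the three functions. The names mirror pca.py, but this is a hypothetical illustration, not the graded implementation (the graded signatures may differ):

```python
import numpy as np

class PCASketch:
    def fit(self, X):
        # center first (Hint 1), then thin SVD (full_matrices=False)
        self.mean = X.mean(axis=0)
        _, self.S, self.Vt = np.linalg.svd(X - self.mean, full_matrices=False)

    def transform(self, X, k=2):
        # project the centered data onto the first k right singular vectors
        return (X - self.mean) @ self.Vt[:k].T

    def transform_rv(self, X, retained_variance=0.99):
        # np.cumsum gives the running fraction of variance retained;
        # keep the smallest k whose cumulative fraction reaches the target
        frac = np.cumsum(self.S ** 2) / np.sum(self.S ** 2)
        k = int(np.searchsorted(frac, retained_variance)) + 1
        return self.transform(X, k)
```

Because the data is centered before projection, each transformed feature has (numerically) zero mean.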

2.1.1 Local Tests for PCA [No Points]¶

You may test your implementation of the functions contained in pca.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestPCA

unittest_pca = TestPCA()
unittest_pca.test_pca()
unittest_pca.test_transform()
unittest_pca.test_transform_rv()
UnitTest passed successfully for "PCA fit"!
UnitTest passed successfully for "PCA transform"!
UnitTest passed successfully for "PCA transform with recovered variance"!

2.2 Visualize [5 pts] **[W]**¶

PCA is used to transform multivariate data tables into smaller sets so as to observe the hidden trends and variations in the data. It can also be used as a feature extractor for images. Here you will visualize two datasets using PCA: first the iris dataset, and then a dataset of masked and unmasked images.

In the pca.py, complete the following function:

  • visualize: Use your implementation of PCA and reduce the datasets such that they contain only two features. Using Matplotlib's Pyplot, create 2-D scatter plots of the data points using these features. Make sure to differentiate the data points according to their true labels using color.

The datasets have already been loaded for you in the subsequent cells.

NOTE: Here, we won't be testing for accuracy. Even with correct implementations of PCA, the accuracy can differ from the TA solution. That is fine as long as the visualizations come out similar.
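For the scatter-plot step, one possible shape is sketched below. This is a hypothetical helper, not the graded visualize; it assumes X_2d is the data already reduced to 2 features by your PCA implementation:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe outside the notebook
import matplotlib.pyplot as plt

def scatter_by_label(X_2d, y):
    # one scatter call per true label, so each label gets its own color
    for label in np.unique(y):
        mask = y == label
        plt.scatter(X_2d[mask, 0], X_2d[mask, 1], label=str(label))
    plt.legend()
```

Passing label= to each scatter call also avoids the "No artists with labels found" legend warning.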

Iris Dataset¶

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Use PCA for visualization of iris dataset

from pca import PCA

iris_data = load_iris(return_X_y=True)

X = iris_data[0]
y = iris_data[1]

fig = plt.figure()
plt.title('Iris Dataset with Dimensionality Reduction')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
PCA().visualize(X,y,fig)
Data shape before PCA: (150, 4)
Data shape before PCA: (150, 2)
Labels: [0 1 2]
No artists with labels found to put in legend.  Note that artists whose label start with an underscore are ignored when legend() is called with no argument.

2.3 PCA Reduced Facemask Dataset Analysis [5 pts] **[W]**¶

Facemask Dataset¶

The masked and unmasked dataset is made up of grayscale images of human faces facing forward. Half of these images are faces that are completely unmasked, and the remaining images show half of the face covered with an artificially generated face mask. The images have already been preprocessed: each is reduced to a small size of 64x64 pixels and then reshaped into a feature vector of 4096 pixels. Below is a sample of some of the images in the dataset.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

X = np.load('./data/smallflat_64.npy')
y = np.load('./data/masked_labels.npy').astype('int')
i = 0
fig = plt.figure(figsize=(18, 18))
for idx in [0,1,2,150,151,152]:
    ax = fig.add_subplot(6, 6, i + 1, xticks=[], yticks=[])
    ax.imshow(X[idx].reshape(64, 64), cmap = 'gray')
    m_status = 'Unmasked' if idx < 150 else 'Masked'
    ax.set_title(f"{m_status} Image at i = {idx}")
    i += 1
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
# Use PCA for visualization of masked and unmasked images

X = np.load('./data/smallflat_64.npy')
y = np.load('./data/masked_labels.npy')

fig = plt.figure()
plt.title('Facemask Dataset Visualization with Dimensionality Reduction')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
PCA().visualize(X,y, fig)
print('*In this plot, the 0 points are unmasked images and the 1 points are masked images.')
Data shape before PCA: (300, 4096)
Data shape before PCA: (300, 2)
Labels: [0. 1.]
No artists with labels found to put in legend.  Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
*In this plot, the 0 points are unmasked images and the 1 points are masked images.

What do you think of this 2-dimensional plot, knowing that the original dataset was a set of flattened image vectors with 4096 pixels/features?

  1. Look at the 2-dimensional plot above. If the facemask dataset that has been reduced to 2 features was fed into a classifier, do you think the classifier would produce high accuracy or low accuracy in comparison to the original dataset which had 4096 pixels/features? Why? You can refer to the 2D visualization made above (One or two sentences will suffice for this question) (3 pts)

    Answer: Higher accuracy. Here, we can see that separating the data with an almost horizontal line around $\sim -0.1$ would produce only one or two misclassifications, i.e. a minimum accuracy of $99.33\%$. On the other hand, we cannot say the same for the original dataset with the same classifier; as a matter of fact, it is unlikely that this would be the case.

  2. Assuming an equal rate of accuracy, what do you think is the main advantage of feeding a classifier a dataset with 2 features vs a dataset with 4096 features? (One sentence will suffice for this question.) (2 pts)

    Answer: Interpretability, as one can plot it and check by oneself what the classifier is doing. Another equally important advantage would be runtime, especially in large datasets.

2.4 PCA Exploration [No Points]¶

Note The accuracy can differ from the TA solution and this section is not graded.

Emotion Dataset [No Points]¶

Now you will use PCA on an actual real-world dataset. We will use your implementation of PCA to reduce the dataset with 99% retained variance and obtain the reduced features. On the reduced dataset, we will use logistic and linear regression to compare results between the PCA and non-PCA datasets. Run the following cells to see how PCA works on regression and classification tasks.

The first dataset we will use is an emotion dataset made up of grayscale images of human faces that are visibly happy or visibly sad. Note how accuracy increases after reducing the number of features used.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

X = np.load('./data/emotion_features.npy')
y = np.load('./data/emotion_labels.npy').astype('int')
i = 0
fig = plt.figure(figsize=(18, 18))
for idx in [0,1,2,150,151,152]:
    ax = fig.add_subplot(6, 6, i + 1, xticks=[], yticks=[])
    ax.imshow(X[idx].reshape(64, 64), cmap = 'gray')
    m_status = 'Sad' if idx < 150 else 'Happy'
    ax.set_title(f"{m_status} Image at i = {idx}")
    i += 1
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

X = np.load('./data/emotion_features.npy')
y = np.load('./data/emotion_labels.npy').astype('int')

print("Not Graded - Data shape before PCA ",X.shape)

pca = PCA()
pca.fit(X)

X_pca = pca.transform_rv(X, retained_variance = 0.99)

print("Not Graded - Data shape with PCA ",X_pca.shape)
Not Graded - Data shape before PCA  (600, 4096)
Not Graded - Data shape with PCA  (600, 599)
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
# Train, test splits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, 
                                                    stratify=y, 
                                                    random_state=42)

# Use logistic regression to predict classes for test set
clf = LogisticRegression()
clf.fit(X_train, y_train)
preds = clf.predict_proba(X_test)
print('Not Graded - Accuracy before PCA: {:.5f}'.format(accuracy_score(y_test, 
                                                preds.argmax(axis=1))))
Not Graded - Accuracy before PCA: 0.95000
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
# Train, test splits
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=.3, 
                                                    stratify=y, 
                                                    random_state=42)

# Use logistic regression to predict classes for test set
clf = LogisticRegression()
clf.fit(X_train, y_train)
preds = clf.predict_proba(X_test)
print('Not Graded - Accuracy after PCA: {:.5f}'.format(accuracy_score(y_test, 
                                                preds.argmax(axis=1))))
Not Graded - Accuracy after PCA: 0.95000

Now we will explore sklearn's Diabetes dataset using PCA dimensionality reduction and regression. Notice the RMSE score reduction after we apply PCA.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
from sklearn.linear_model import RidgeCV
def apply_regression(X_train, y_train, X_test):
    ridge =  RidgeCV(alphas=[1e-3, 1e-2, 1e-1, 1])
    clf = ridge.fit(X_train, y_train)
    y_pred = ridge.predict(X_test)
    
    return y_pred
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
#load the dataset 
diabetes = load_diabetes()
X = diabetes.data
y = diabetes.target

print(X.shape, y.shape)

pca = PCA()
pca.fit(X)

X_pca = pca.transform_rv(X, retained_variance = 0.9)
print("Not Graded - data shape with PCA ",X_pca.shape)
(442, 10) (442,)
Not Graded - data shape with PCA  (442, 9)
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
# Train, test splits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=42)

#Ridge regression without PCA
y_pred = apply_regression(X_train, y_train, X_test)

# calculate RMSE 
rmse_score = np.sqrt(mean_squared_error(y_pred, y_test))
print('Not Graded - RMSE score using Ridge Regression before PCA: {:.5}'.format(rmse_score))
Not Graded - RMSE score using Ridge Regression before PCA: 53.101
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
#Ridge regression with PCA
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=.3, random_state=42)

#use Ridge Regression for getting predicted labels
y_pred = apply_regression(X_train,y_train,X_test)

#calculate RMSE 
rmse_score = np.sqrt(mean_squared_error(y_pred, y_test))
print('Not Graded - RMSE score using Ridge Regression after PCA: {:.5}'.format(rmse_score))
Not Graded - RMSE score using Ridge Regression after PCA: 52.989

Q3 Polynomial regression and regularization [80pts: 50pts + 20pts Bonus for Undergrads + 10pts Bonus for All] **[P]** | **[W]**¶

3.1 Regression and regularization implementations [50pts: 30 pts + 20 pts bonus for CS 4641] **[P]**¶

We have three methods to fit linear and ridge regression models: 1) the closed form solution; 2) gradient descent (GD); 3) stochastic gradient descent (SGD). Some of the functions are bonus; see the function list below for what graduate and undergraduate students are required to implement. We use the term weight in the following code. Weights and parameters ($\theta$) have the same meaning here; we used parameters ($\theta$) in the lecture slides.

In the regression.py file, complete the Regression class by implementing the listed functions below. We have provided the Loss function, $L$, associated with the GD and SGD function for Linear and Ridge Regression for deriving the gradient update.

  • rmse
  • construct_polynomial_feats
  • predict
  • linear_fit_closed: You should use np.linalg.pinv in this function
  • linear_fit_GD (bonus for undergrad, required for grad): $$ L_{\text{linear, GD}}(\theta) = \dfrac{1}{2N} \sum_{i=0}^{N} [y_i - \hat{y}_i(\theta)]^2 \quad\quad y_i = \text{label}, \, \hat{y}_i(\theta) = \text{prediction} $$

  • linear_fit_SGD (bonus for undergrad, required for grad): $$ L_{\text{linear, SGD}}(\theta) = \dfrac{1}{2} [y_i - \hat{y}_i(\theta)]^2 \quad\quad y_i = \text{label}, \, \hat{y}_i(\theta) = \text{prediction} $$

  • ridge_fit_closed: You should adjust your I matrix to handle the bias term differently than the rest of the terms

  • ridge_fit_GD (bonus for undergrad, required for grad): $$ L_{\text{ridge, GD}}(\theta) = L_{\text{linear, GD}}(\theta) + \dfrac{c_{\lambda}}{2}\theta^T\theta $$

  • ridge_fit_SGD (bonus for undergrad, required for grad):

$$ L_{\text{ridge, SGD}}(\theta) = L_{\text{linear, SGD}}(\theta) + \dfrac{c_{\lambda}}{2N}\theta^T\theta $$
  • ridge_cross_validation: Use ridge_fit_closed for this function

IMPORTANT NOTE:

  • Use your RMSE function to calculate actual loss when coding GD and SGD, but use the loss listed above to derive the gradient update.

The points for each function are listed in the Deliverables and Points Distribution section.
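For the ridge_fit_closed hint above, one common way to exempt the bias term from regularization is to zero out the corresponding entry of the identity matrix. Below is a minimal sketch under the assumption that the first column of X is the bias column; `ridge_closed_sketch` is an illustrative name, not the assignment's required signature.

```python
import numpy as np

def ridge_closed_sketch(X, y, c_lambda):
    # Identity matrix with the bias entry zeroed so the bias term is not penalized.
    I = np.eye(X.shape[1])
    I[0, 0] = 0.0
    # Closed-form ridge solution: theta = pinv(X^T X + lambda * I) X^T y
    return np.linalg.pinv(X.T @ X + c_lambda * I) @ X.T @ y

rng = np.random.RandomState(1)
X = np.hstack([np.ones((50, 1)), rng.randn(50, 3)])  # first column is the bias
y = X @ np.array([[1.0], [2.0], [-1.0], [0.5]])
w_no_reg = ridge_closed_sketch(X, y, c_lambda=0.0)
w_reg = ridge_closed_sketch(X, y, c_lambda=100.0)
```

With `c_lambda=0` this reduces to ordinary least squares; increasing `c_lambda` shrinks the non-bias weights while leaving the bias unpenalized.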

3.1.1 Local Tests for Helper Regression Functions [No Points]¶

You may test your implementation of the functions contained in regression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestRegression

unittest_reg = TestRegression()
unittest_reg.test_rmse()
unittest_reg.test_construct_polynomial_feats()
unittest_reg.test_predict()
UnitTest passed successfully for "RMSE"!
UnitTest passed successfully for "Polynomial feature construction"!
UnitTest passed successfully for "Linear regression prediction"!

3.1.2 Local Tests for Linear Regression Functions [No Points]¶

You may test your implementation of the functions contained in regression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestRegression

unittest_reg = TestRegression()
unittest_reg.test_linear_fit_closed()
UnitTest passed successfully for "Closed form linear regression"!

3.1.3 Local Tests for Ridge Regression Functions [No Points]¶

You may test your implementation of the functions contained in regression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestRegression

unittest_reg = TestRegression()
unittest_reg.test_ridge_fit_closed()
# unittest_reg.test_ridge_cross_validation()
UnitTest passed successfully for "Closed form ridge regression"!

3.1.4 Local Tests for Gradient Descent and SGD (Bonus for Undergrad Tests) [No Points]¶

You may test your implementation of the functions contained in regression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
from utilities.localtests import TestRegression

unittest_reg = TestRegression()
unittest_reg.test_linear_fit_GD()
unittest_reg.test_linear_fit_SGD()
unittest_reg.test_ridge_fit_GD()
unittest_reg.test_ridge_fit_SGD()
UnitTest passed successfully for "Gradient descent linear regression"!
UnitTest passed successfully for "Stochastic gradient descent linear regression"!
UnitTest passed successfully for "Gradient descent ridge regression"!
UnitTest passed successfully for "Stochastic gradient descent ridge regression"!

3.2 About RMSE [3 pts] **[W]**¶

What is a good RMSE value?

If we normalize our labels such that the true labels $y$ and the model outputs $\hat{y}$ can only be between 0 and 1, what does it mean when the RMSE = 1? Please provide an example with your explanation.

ANSWER: For a normalized dataset, having RMSE = 1 means that the model is always predicting as far from the true label as possible. If you consider the set $y = [0\; 0.25\; 0.5\; 0.75\; 1]$ and the worst possible prediction, the one farthest from the ground truth, $\hat{y} = [1\; 1\;0\;0\;0]$, then in this case $RMSE \approx 0.82$. So in order to approach RMSE = 1 you would need a very large set with only wrong predictions, and even then, to actually achieve RMSE = 1 you would need a binary set, i.e. labels of only 0 or 1, and still predict all of them maximally wrong.
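A quick numerical check of the worst-case example above (the exact value is $\sqrt{0.675} \approx 0.82$):

```python
import numpy as np

y = np.array([0, 0.25, 0.5, 0.75, 1.0])   # normalized true labels
y_hat = np.array([1, 1, 0, 0, 0])          # worst-case predictions for these labels
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(round(rmse, 4))  # 0.8216
```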

3.3 Testing: General Functions and Linear Regression [5 pts] **[W]**¶

In this section, we will test the performance of linear regression. As long as your test RMSE score is close to the TA's answer (TA's answer $\pm 0.05$), you can get full points. Let's first construct a dataset for polynomial regression.

In this case, we construct the polynomial features up to degree 7 (POLY_DEGREE in the cell below). Each data sample consists of two features $[a,b]$. We compute the polynomial features of both $a$ and $b$ in order to yield the vectors $[1,a,a^2,a^3, \ldots, a^{\text{degree}}]$ and $[1,b,b^2,b^3, \ldots, b^{\text{degree}}]$. We train our model with the Cartesian product of these polynomial features. The Cartesian product generates a new feature vector consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.

For example, if degree = 2, we will have the polynomial features $[1,a,a^2]$ and $[1,b,b^2]$ for the datapoint $[a,b]$. The Cartesian product of these two vectors is $[1,a,b,ab,a^2,b^2]$. We do not generate $a^3$ and $b^3$ since their degree is greater than 2 (the specified degree).
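To make the construction concrete, here is a small sketch of the degree-filtered Cartesian product; `cartesian_poly_feats` is a hypothetical helper for illustration, not part of the assignment API.

```python
def cartesian_poly_feats(a, b, degree):
    # Polynomial features of each raw feature: [1, a, ..., a^degree], [1, b, ..., b^degree]
    pa = [a ** k for k in range(degree + 1)]
    pb = [b ** k for k in range(degree + 1)]
    feats = []
    for i in range(degree + 1):
        for j in range(degree + 1):
            if i + j <= degree:  # keep only combinations of total degree <= degree
                feats.append(pa[i] * pb[j])
    return feats

print(cartesian_poly_feats(2, 3, 2))  # degree-2 terms for the datapoint [a, b] = [2, 3]
```

For $[a, b] = [2, 3]$ and degree 2, the six retained terms correspond to $1, b, b^2, a, ab, a^2$.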

In [ ]:
from regression import Regression
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
POLY_DEGREE = 7
N_SAMPLES = 1200

rng = np.random.RandomState(seed=10)

# Simulating a regression dataset with polynomial features.
true_weight = rng.rand(POLY_DEGREE ** 2 + 2, 1)
x_feature1 = np.linspace(-5, 5, N_SAMPLES)
x_feature2 = np.linspace(-3, 3, N_SAMPLES)
x_all = np.stack((x_feature1, x_feature2), axis=1)

reg = Regression()
x_all_feat = reg.construct_polynomial_feats(x_all, POLY_DEGREE)
x_cart_flat = []
for i in range(x_all_feat.shape[0]):
    point = x_all_feat[i]
    x1 = point[:,0]
    x2 = point[:,1]
    x1_end = x1[-1]
    x2_end = x2[-1]
    x1 = x1[:-1]
    x2 = x2[:-1]
    x3 = np.asarray([[m*n for m in x1] for n in x2])

    x3_flat = list(np.reshape(x3, (x3.shape[0] ** 2)))
    x3_flat.append(x1_end)
    x3_flat.append(x2_end)
    x3_flat = np.asarray(x3_flat)
    x_cart_flat.append(x3_flat)
  
x_cart_flat = np.asarray(x_cart_flat)
x_cart_flat = (x_cart_flat - np.mean(x_cart_flat)) / np.std(x_cart_flat)  # Normalize
x_all_feat = np.copy(x_cart_flat)

# We must add noise to data, else the data will look unrealistically perfect.
y_noise = rng.randn(x_all_feat.shape[0], 1)
y_all = np.dot(x_cart_flat, true_weight) + y_noise
print("x_all: ", x_all.shape[0], " (rows/samples) ", x_all.shape[1], " (columns/features)", sep="")
print("y_all: ", y_all.shape[0], " (rows/samples) ", y_all.shape[1], " (columns/features)", sep="")
x_all: 1200 (rows/samples) 2 (columns/features)
y_all: 1200 (rows/samples) 1 (columns/features)
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

p = np.reshape(np.dot(x_cart_flat, true_weight), (N_SAMPLES,))
ax.scatter(x_all[:,0], x_all[:,1], y_all, label='Datapoints', s=4, alpha=0.2)
ax.plot(x_all[:,0], x_all[:,1], p, label='Line of Best Fit', c="red", linewidth=2)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)

ax.legend()
ax.text2D(0.05, 0.95, "All Simulated Datapoints", transform=ax.transAxes)
plt.show()

In the figure above, the red curve is the true function we want to learn, while the blue dots are the noisy data points. The data points are generated by $Y=X\theta+\epsilon$, where the $\epsilon_i \sim N(0,1)$ are i.i.d. generated noise.

Now let's split the data into two parts, the training set and testing set. The yellow dots are for training, while the black dots are for testing.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
PERCENT_TRAIN = 0.8

all_indices = rng.permutation(N_SAMPLES)  # Random indicies
train_indices = all_indices[:round(N_SAMPLES * PERCENT_TRAIN)]  # 80% Training
test_indices = all_indices[round(N_SAMPLES * PERCENT_TRAIN):]  # 20% Testing

xtrain = x_all[train_indices]
ytrain = y_all[train_indices]
xtest = x_all[test_indices]
ytest = y_all[test_indices]

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

ax.scatter(xtrain[:,0], xtrain[:,1], ytrain, label='Training', c='y',s=4)
ax.scatter(xtest[:,0], xtest[:,1], ytest, label='Testing', c='black',s=4)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
ax.legend(loc = 'upper right')
plt.show()

Now let us train our model using the training set and see how our model performs on the testing set. Observe the red line, which is our model's learned function.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for both Grad and Undergrad

weight = reg.linear_fit_closed(x_all_feat[train_indices], y_all[train_indices])
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all[test_indices])
print('Linear (closed) RMSE: %.4f' % test_rmse)

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

y_pred = reg.predict(x_all_feat, weight)
y_pred = np.reshape(y_pred, (y_pred.size,))
ax.plot(x_all[:,0], x_all[:,1], y_pred, label='Trendline', color='r', lw=2, zorder=5)

ax.scatter(xtrain[:,0], xtrain[:,1], ytrain, label='Training', c='y',s=4)
ax.scatter(xtest[:,0], xtest[:,1], ytest, label='Testing', c='black',s=4)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)

ax.text2D(0.05, 0.95, "Linear (Closed)", transform=ax.transAxes)
ax.legend(loc = 'upper right')
plt.show()
Linear (closed) RMSE: 0.9097

HINT: If your RMSE is off, make sure to follow the instruction given for linear_fit_closed in the list of functions to implement above.

Now let's use our linear gradient descent function with the same setup. Observe that the trendline is now less optimal and our RMSE has increased. Do not be alarmed.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for Grad Only
# This cell may take more than 1 minute

weight, _ = reg.linear_fit_GD(x_all_feat[train_indices],
                           y_all[train_indices],
                           epochs=50000,
                           learning_rate=1e-8)
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all[test_indices])
print('Linear (GD) RMSE: %.4f' % test_rmse)

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

y_pred = reg.predict(x_all_feat, weight)
y_pred = np.reshape(y_pred, (y_pred.size,))
ax.plot(x_all[:,0], x_all[:,1], y_pred, label='Trendline', color='r', lw=2, zorder=5)

ax.scatter(xtrain[:,0], xtrain[:,1], ytrain, label='Training', c='y',s=4)
ax.scatter(xtest[:,0], xtest[:,1], ytest, label='Testing', c='black',s=4)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)

ax.text2D(0.05, 0.95, "Linear (GD)", transform=ax.transAxes)
ax.legend(loc = 'upper right')
plt.show()
Linear (GD) RMSE: 5.1484

We must tune our epochs and learning_rate. As we tune these parameters, our trendline will approach the trendline generated by the linear closed form solution. Observe how we slowly tune (increase) the epochs and learning_rate below to create a better model.

Note that the closed form solution will always give the most optimal (and most overfit) results. We cannot outperform the closed form solution with GD; we can only approach the closed form's level of optimality/overfitting. We leave the reasoning behind this as an exercise to the reader.
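As a reference for what the tuning loop below is doing, here is a minimal gradient-descent sketch for the loss $L_{\text{linear, GD}}$ given earlier; `linear_gd_sketch` is illustrative only and does not match the assignment's required signature.

```python
import numpy as np

def linear_gd_sketch(X, y, epochs=5000, learning_rate=0.1):
    N, D = X.shape
    theta = np.zeros((D, 1))
    for _ in range(epochs):
        # Gradient of L = (1/2N) * sum_i (y_i - x_i theta)^2 with respect to theta
        grad = -X.T @ (y - X @ theta) / N
        theta = theta - learning_rate * grad
    return theta

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 3)
true_theta = np.array([[1.0], [-2.0], [0.5]])
theta_hat = linear_gd_sketch(X_demo, X_demo @ true_theta)
```

On this small well-conditioned problem a moderately large learning rate converges quickly; on the high-degree polynomial features above, much smaller learning rates (e.g. 1e-8 to 1e-4) are needed to keep the updates stable, which is why tuning matters.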

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for Grad Only
# This cell may take more than 1 minute

learning_rates = [1e-8, 1e-6, 1e-4]
weights = np.zeros((3, POLY_DEGREE ** 2 + 2))

for ii in range(len(learning_rates)):
    weights[ii,:] = reg.linear_fit_GD(x_all_feat[train_indices],
                                      y_all[train_indices],
                                      epochs=50000,
                                      learning_rate=learning_rates[ii])[0].ravel()
    y_test_pred = reg.predict(x_all_feat[test_indices],
                              weights[ii, :].reshape((POLY_DEGREE ** 2 + 2, 1)))
    test_rmse = reg.rmse(y_test_pred, y_all[test_indices])
    print('Linear (GD) RMSE: %.4f (learning_rate=%s)' % (test_rmse, learning_rates[ii]))

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

colors = ['g', 'orange', 'r']
for ii in range(len(learning_rates)):
    y_pred = reg.predict(x_all_feat, weights[ii])
    y_pred = np.reshape(y_pred, (y_pred.size,))
    ax.plot(x_all[:,0], x_all[:,1], y_pred,
            label='Trendline LR=' + str(learning_rates[ii]),
            color=colors[ii], lw=2, zorder=5)

ax.scatter(xtrain[:,0], xtrain[:,1], ytrain, label='Training', c='y',s=4)
ax.scatter(xtest[:,0], xtest[:,1], ytest, label='Testing', c='black',s=4)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
ax.text2D(0.05, 0.95, "Tuning Linear (GD)", transform=ax.transAxes)
ax.legend(loc = 'upper right')
plt.show()
Linear (GD) RMSE: 5.1484 (learning_rate=1e-08)
Linear (GD) RMSE: 3.3447 (learning_rate=1e-06)
Linear (GD) RMSE: 1.1079 (learning_rate=0.0001)

And what if we use only 10 data points to train?

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
rng = np.random.RandomState(seed=5)
y_all_noisy = np.dot(x_cart_flat, np.zeros((POLY_DEGREE ** 2 + 2, 1))) + rng.randn(x_all_feat.shape[0], 1)
sub_train = train_indices[10:20]
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for both Grad and Undergrad

weight = reg.linear_fit_closed(x_all_feat[sub_train], y_all_noisy[sub_train])
y_pred = reg.predict(x_all_feat, weight)
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all_noisy[test_indices])
print('Linear (closed) 10 Samples RMSE: %.4f' % test_rmse)

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

x1 = x_all[:,0]
x2 = x_all[:,1]
y_pred = np.reshape(y_pred, (N_SAMPLES,))
ax.plot(x1, x2, y_pred, color='b', lw=4)

x3 = x_all[sub_train,0]
x4 = x_all[sub_train,1]
ax.scatter(x3, x4, y_all_noisy[sub_train], s=100, c='r', marker='x')

y_test_pred = reg.predict(x_all_feat[test_indices], weight)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")
ax.set_zlim([None, 8])

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)

ax.text2D(0.05, 0.95, "Linear Regression (Closed)", transform=ax.transAxes)
Linear (closed) 10 Samples RMSE: 2457.9318
Out[ ]:
Text(0.05, 0.95, 'Linear Regression (Closed)')

Did you see a worse performance? Let's take a closer look at what we have learned.

3.4 Testing: Testing ridge regression [5 pts] **[W]**¶

Now let's try ridge regression. As before, undergraduate students need to implement the closed form, and graduate students need to implement all three methods. We will call the prediction function from the linear regression part. As long as your test RMSE score is close to the TA's answer (TA's answer $\pm 0.05$), you can get full points.

Again, let's see what we have learned. You only need to run the cell corresponding to your specific implementation.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for both Grad and Undergrad

weight = reg.ridge_fit_closed(x_all_feat[sub_train],
                              y_all_noisy[sub_train],
                              c_lambda=10)
y_pred = reg.predict(x_all_feat, weight)
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all_noisy[test_indices])
print('Ridge Regression (closed) RMSE: %.4f' % test_rmse)

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

x1 = x_all[:,0]
x2 = x_all[:,1]
y_pred = np.reshape(y_pred, (N_SAMPLES,))
ax.plot(x1, x2, y_pred, color='b', lw=4)

x3 = x_all[sub_train,0]
x4 = x_all[sub_train,1]
ax.scatter(x3, x4, y_all_noisy[sub_train], s=100, c='r', marker='x')

y_test_pred = reg.predict(x_all_feat[test_indices], weight)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
ax.set_zlim([None, 8])
ax.text2D(0.05, 0.95, "Ridge Regression (Closed)", transform=ax.transAxes)
Ridge Regression (closed) RMSE: 1.4765
Out[ ]:
Text(0.05, 0.95, 'Ridge Regression (Closed)')

HINT: Make sure to follow the instruction given for ridge_fit_closed in the list of functions to implement above.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for Grad Only

weight, _ = reg.ridge_fit_GD(x_all_feat[sub_train],
                          y_all_noisy[sub_train],
                          c_lambda=20, learning_rate=1e-5)
y_pred = reg.predict(x_all_feat, weight)
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all_noisy[test_indices])
print('Ridge Regression (GD) RMSE: %.4f' % test_rmse)

# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

x1 = x_all[:,0]
x2 = x_all[:,1]
y_pred = np.reshape(y_pred, (N_SAMPLES,))
ax.plot(x1, x2, y_pred, color='b', lw=4)

x3 = x_all[sub_train,0]
x4 = x_all[sub_train,1]
ax.scatter(x3, x4, y_all_noisy[sub_train], s=100, c='r', marker='x')

y_test_pred = reg.predict(x_all_feat[test_indices], weight)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")
ax.set_zlim([None, 8])

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)

ax.text2D(0.05, 0.95, "Ridge Regression (GD)", transform=ax.transAxes)
Ridge Regression (GD) RMSE: 0.9422
Out[ ]:
Text(0.05, 0.95, 'Ridge Regression (GD)')
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Required for Grad Only

weight, _ = reg.ridge_fit_SGD(x_all_feat[sub_train],
                           y_all_noisy[sub_train],
                           c_lambda=20,
                           learning_rate=1e-5)
y_pred = reg.predict(x_all_feat, weight)
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all_noisy[test_indices])
print('Ridge Regression (SGD) RMSE: %.4f' % test_rmse)


# -- Plotting Code --
fig = plt.figure(figsize=(8,5), dpi=120)
ax = fig.add_subplot(111, projection='3d')

x1 = x_all[:,0]
x2 = x_all[:,1]
y_pred = np.reshape(y_pred, (N_SAMPLES,))
ax.plot(x1, x2, y_pred, color='b', lw=4)

x3 = x_all[sub_train,0]
x4 = x_all[sub_train,1]
ax.scatter(x3, x4, y_all_noisy[sub_train], s=100, c='r', marker='x')

y_test_pred = reg.predict(x_all_feat[test_indices], weight)
ax.set_xlabel("feature 1")
ax.set_ylabel("feature 2")
ax.set_zlabel("y")
ax.set_zlim([None, 8])

if not STUDENT_VERSION:
    ax.text2D(0.5, 0.5, EO_TEXT, transform=ax.transAxes,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.4, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
ax.text2D(0.05, 0.95, "Ridge Regression (SGD)", transform=ax.transAxes)
Ridge Regression (SGD) RMSE: 0.9453
Out[ ]:
Text(0.05, 0.95, 'Ridge Regression (SGD)')

3.5 Cross validation [7 pts] **[W]**¶

Let's use Cross Validation to search for the best value for c_lambda in ridge regression.

Imagine we have a dataset of 10 points [1,2,3,4,5,6,7,8,9,10] and we want to do 5-fold cross validation.

  • The first iteration we would train with [3,4,5,6,7,8,9,10] and test (validate) with [1,2]
  • The second iteration we would train with [1,2,5,6,7,8,9,10] and test (validate) with [3,4]
  • The third iteration we would train with [1,2,3,4,7,8,9,10] and test (validate) with [5,6]
  • The fourth iteration we would train with [1,2,3,4,5,6,9,10] and test (validate) with [7,8]
  • The fifth iteration we would train with [1,2,3,4,5,6,7,8] and test (validate) with [9,10]

We have provided a list of possible values for $\lambda$, which you will use in cross validation. Use the 10-fold method and apply it only to your training data (you already have train_indices to select the training data). Split the training data into 10 folds; in each iteration, use 10 percent of the training data for validation and 90 percent for training. For each $\lambda$, you will have calculated 10 RMSE values. Compute the mean of the 10 RMSE values, then pick the $\lambda$ with the lowest mean RMSE.

HINTS:

  • np.concatenate is your friend
  • Make sure to follow the instruction given for ridge_fit_closed in the list of functions to implement above.
  • To use the 10-fold method, loop over all the data 10 times, holding out a different 10% at each iteration. The first iteration holds out the first 10% for validation and uses the remaining 90% for training. The second iteration holds out the second 10% for validation and uses the (different) remaining 90% for training. If we have the array of elements 1 - 10, the second iteration would hold out the number "2" because that's in the second 10% of the array.
  • The hyperparameter_search function will handle averaging the errors, so don't average the errors in ridge_cross_validation. We've done this so you can see your error across every fold when using the gradescope tests.
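The splitting scheme described in the hints can be sketched as follows; `kfold_indices` is a hypothetical helper (assuming n is divisible by kfold), shown only to illustrate the index arithmetic.

```python
import numpy as np

def kfold_indices(n, kfold):
    idx = np.arange(n)
    fold_size = n // kfold
    for i in range(kfold):
        # The i-th slice (10% of the data when kfold=10) is held out for validation.
        val_idx = idx[i * fold_size:(i + 1) * fold_size]
        # The remaining folds are concatenated for training.
        train_idx = np.concatenate([idx[:i * fold_size], idx[(i + 1) * fold_size:]])
        yield train_idx, val_idx

# With 10 points and 5 folds, the second fold validates on indices [2, 3],
# matching the 5-fold walkthrough above.
splits = list(kfold_indices(10, 5))
print(splits[1][1])  # [2 3]
```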
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
lambda_list = [0.0001, 0.001, 0.1, 1, 5, 10, 50, 100, 1000, 10000]
kfold = 10

best_lambda, best_error, error_list = reg.hyperparameter_search(x_all_feat[train_indices], y_all[train_indices], lambda_list, kfold)
for lm, err in zip(lambda_list, error_list):
    print('Lambda: %.4f' % lm, 'RMSE: %.6f'% err)

print('Best Lambda: %.4f' % best_lambda)
weight = reg.ridge_fit_closed(x_all_feat[train_indices], y_all_noisy[train_indices], c_lambda=best_lambda)
y_test_pred = reg.predict(x_all_feat[test_indices], weight)
test_rmse = reg.rmse(y_test_pred, y_all_noisy[test_indices])
print('Best Test RMSE: %.4f' % test_rmse) 
[Raw RMSE arrays printed for each of the ten lambda values omitted for brevity; the per-lambda averages are summarized below.]
Lambda: 0.0001 RMSE: 0.971363
Lambda: 0.0010 RMSE: 0.973637
Lambda: 0.1000 RMSE: 0.976874
Lambda: 1.0000 RMSE: 0.978598
Lambda: 5.0000 RMSE: 0.981447
Lambda: 10.0000 RMSE: 0.983913
Lambda: 50.0000 RMSE: 1.000595
Lambda: 100.0000 RMSE: 1.022675
Lambda: 1000.0000 RMSE: 1.402650
Lambda: 10000.0000 RMSE: 2.607749
Best Lambda: 0.0001
Best Test RMSE: 0.9452

3.6 Noisy Input Samples in Linear Regression [10 pts Bonus for All] **[W]**¶

Consider a linear model of the form: $$ y(x_n,\theta) = \theta_0 + \sum_{d=1}^D\theta_dx_{nd} $$ where $x_n = (x_{n1}, \ldots, x_{nD}) \in \mathbb{R}^{D}$ and the weights are $\theta = (\theta_0, \ldots, \theta_D) \in \mathbb{R}^{D+1}$. Given the $D$-dimensional input sample set $x = \{ x_1, \ldots, x_N\}$ with corresponding target values $y = \{y_1, \ldots, y_N\}$, the sum-of-squares error function is: $$ E_D(\theta) = \frac{1}{2}\sum_{n=1}^N\left[y(x_n,\theta)-y_n\right]^2 $$

Now, suppose that Gaussian noise $\epsilon_n \in \mathbb{R}^{D}$ is added independently to each input sample $x_n$ to generate a new sample set $x'= \{x_1+\epsilon_1, \ldots, x_N+\epsilon_N\}$. Here, $\epsilon_{nd}$ (an entry of $\epsilon_n$) has zero mean and variance $\sigma^2$. For each sample $x_n$, let $x_n' = (x_{n1} + \epsilon_{n1}, \ldots, x_{nD} + \epsilon_{nD})$, where $\epsilon_{nd}$ is independent across both the $n$ and $d$ indices.

  1. (3pts) Show that $y(x_n',\theta) = y(x_n, \theta) + \sum^D_{d=1}\theta_d\epsilon_{nd}$

  2. (7pts) Assume the sum-of-squares error function of the noisy sample set $x'= \{x_1+\epsilon_1, \ldots, x_N+\epsilon_N\}$ is $E_D(\theta)'$. Prove that the expectation of $E_D(\theta)'$ is equivalent to the sum-of-squares error $E_D(\theta)$ for noise-free input samples with the addition of a weight-decay regularization term (e.g., an $\ell_2$ norm), in which the bias parameter $\theta_0$ is omitted from the regularizer. In other words, show that $$ E[E_D(\theta)'] = E_D(\theta) + \text{Regularizer}. $$

N.B. You should incorporate your solution from the first part of this problem into the given sum-of-squares equation for the second part.

HINT:

  • In class, we discussed how to solve for the weights $\theta$ in ridge regression, whose objective function looks like this: $$E(\theta)=\frac{1}{N}\sum_{i=1}^N\left[ y(x_i,\theta)-y_i \right]^2+\frac{\lambda}{N}\sum_{i=1}^D |\theta_i|^2$$ where the first term is the sum-of-squares error, the second term is the regularization term, and $N$ is the number of samples. In this question, we use another form of ridge regression: $$ E(\theta)=\frac{1}{2}\sum_{i=1}^N\left[y(x_i,\theta)-y_i \right]^2+\frac{\lambda}{2}\sum_{i=1}^D |\theta_i|^2 $$
  • For the Gaussian noise $\epsilon_n$, we have $E[\epsilon_n]=0$

  • Assume the noise terms $\epsilon = (\epsilon_1,\ldots, \epsilon_N)$ are independent of each other, so that $$ E[\epsilon_n\epsilon_m]=\left\{ \begin{array}{rcl} \sigma^2 & & m = n\\ 0 & & m \neq n\\ \end{array} \right. $$
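Before writing the proof, it can help to sanity-check the claimed identity numerically. The sketch below is our own illustrative setup (random data, $\sigma = 0.5$, a Monte Carlo average over noise draws); the penalty coefficient used for `predicted` is what the hints above suggest, and checking it numerically is a good way to verify your algebra, not a substitute for the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, sigma = 2000, 3, 0.5
theta0 = 1.0                      # bias term (excluded from the regularizer)
theta = rng.normal(size=D)        # weights theta_1..theta_D
X = rng.normal(size=(N, D))
y = rng.normal(size=N)            # arbitrary targets

def sse(Xmat):
    # sum-of-squares error E_D(theta) for a given input matrix
    pred = theta0 + Xmat @ theta
    return 0.5 * np.sum((pred - y) ** 2)

# Average the noisy-input error over many independent noise draws
trials = 200
noisy = np.mean([sse(X + sigma * rng.normal(size=X.shape))
                 for _ in range(trials)])

# Clean error plus a ridge-style penalty (our assumed closed form)
predicted = sse(X) + 0.5 * sigma**2 * N * np.sum(theta**2)
print(noisy, predicted)  # the two values should agree closely
```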

  1. Answer: ...
  1. Answer: ...

Q4: Naive Bayes and Logistic Regression [35pts] **[P]** | **[W]**¶

In Bayesian classification, we're interested in finding the probability of a label given some observed feature vector $x = [x_{1}, \ldots, x_{d}]$, which we can write as $P(y~|~{ x_{1}, \ldots, x_{d}})$. Bayes's theorem tells us how to express this in terms of quantities we can compute more directly:

$$ P(y~|~{ x_{1}, \ldots, x_{d}}) = \frac{P({ x_{1}, \ldots, x_{d}}~|~y)P(y)}{P({ x_{1}, \ldots, x_{d}})} $$

The main assumption in Naive Bayes is that, given the label, the observed features are conditionally independent, i.e.,

$$ P({ x_{1}, \ldots, x_{d}}~|~y) = P({x_{1}}~|~y) \times \ldots \times P({x_{d}}~|~y) $$

Therefore, we can rewrite Bayes rule as

$$ P(y~|~{ x_{1}, \ldots, x_{d}}) = \frac{P({x_{1}}~|~y) \times \ldots \times P({x_{d}}~|~y)P(y)}{P({ x_{1}, \ldots, x_{d}})} $$

Training Naive Bayes¶

One way to train a Naive Bayes classifier is to use a frequentist approach to estimate the probabilities: simply go over the training data and calculate the frequency of different observations given each label. For example,
$$ P({x_{1}=i}~|~y=j) = \frac{P({x_{1}=i}, y=j)}{P(y=j)} = \frac{\text{Number of times in training data } x_{1}=i \text{ and } y=j }{\text{Total number of times in training data } y=j} $$
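The counting rule above is one line of NumPy. The toy arrays below are made up purely for illustration:

```python
import numpy as np

# Toy training set: 6 samples, binary feature x1 and binary label y
x1 = np.array([1, 0, 1, 1, 0, 1])
y  = np.array([1, 1, 0, 1, 0, 0])

# P(x1 = 1 | y = 1): count co-occurrences, divide by the label count
p = np.sum((x1 == 1) & (y == 1)) / np.sum(y == 1)
print(p)  # 2 of the 3 samples with y=1 have x1=1 -> 0.666...
```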

Testing Naive Bayes¶

During the testing phase, we try to estimate the probability of a label given an observed feature vector. We combine the probabilities computed from training data to estimate the probability of a given label. For example, if we are trying to decide between two labels $y_{1}$ and $y_{2}$, then we compute the ratio of the posterior probabilities for each label:

$$ \frac{P(y_{1}~|~ x_{1}, \ldots, x_{d})}{P(y_2~|~x_{1},\ldots, x_{d})} = \frac{P(x_{1}, \ldots, x_{d}~|~y_{1})}{P(x_{1}, \ldots, x_{d}~|~y_{2})}\frac{P(y_1)}{P(y_2)}= \frac{P({x_{1}}~|~y_{1}) \times \ldots \times P({x_{d}}~|~y_{1})P(y_{1})}{P({x_{1}}~|~y_{2}) \times \ldots \times P({x_{d}}~|~y_{2})P(y_{2})} $$

All we need now is to compute $P(x_{1}~|~y_{i}), \ldots, P(x_{d}~|~y_i)$ and $P(y_{i})$ for each label by plugging in the numbers we obtained during training. The label with the higher posterior probability is selected.

4.1 Llama Breed Problem using Naive Bayes [5pts] [W]¶

Above are images of two different breeds of llamas: the Suri and the Wooly. The difference between these two breeds is subtle, and they are often mixed up. However, the Suri llama is vastly more valuable than the Wooly llama. You devise a way to determine, with some confidence, which is which without the need for expensive genetic testing.

You look at four key features of the llama: {curly hair, over 14 inch tail, over 400 pounds, extremely shy}.

You only have 6 randomly chosen llamas to work with, and their breed as the ground truth. You record the data as vectors with the entry 1 if true and 0 if false. For example a llama with vector {1,1,0,1} would have curly hair, a tail over 14 inches, be less than 400 pounds, and be extremely shy.

The Suri Llamas yield the following data: {1, 0, 1, 0}, {1, 1, 0, 1}, {1, 1, 1, 1}, {0, 0, 0, 1}

The Wooly Llamas yield the following data: {0, 0, 1, 0}, {1, 1, 0, 0}.

Now is the time to test your method!

You see a new llama you are interested in that has curly hair, does not have a tail over 14 inches, is less than 400 pounds, and is not shy.

Using Naive Bayes, is this a Suri or a Wooly Llama?

NOTE: We expect students to show their work (Naive Bayes calculations) and not just the final answer.

Answer: Let's calculate feature probabilities given labels from training data. Let S = Suri, W = Wooly, c = curly hair, t = long tail, h = over 400lb, s = extremely shy. $$P(c|S) = \frac{3}{4}; P(t|S) = \frac{1}{2}; P(h|S) = \frac{1}{2}; P(s|S) = \frac{3}{4} $$ $$P(c|W) = \frac{1}{2}; P(t|W) = \frac{1}{2}; P(h|W) = \frac{1}{2}; P(s|W) = 0 $$

Let's call the observed feature vector {1, 0, 0, 0} $= l$. Calculating the joint probability of each breed with these features: $$P(l, S) = P(S) \cdot \big( P(c|S)\cdot(1-P(t|S))\cdot(1- P(h|S))\cdot(1-P(s|S)) \big) = 0.03125$$ $$P(l, W) = P(W) \cdot \big( P(c|W)\cdot(1-P(t|W))\cdot(1- P(h|W))\cdot(1-P(s|W)) \big) \approx 0.04167$$

Total probability of these features: $P(l) = P(l, S)+P(l, W) \approx 0.07292$

Now using Bayes' rule, we can calculate the posterior for each breed: $$P(S|l) = \frac{P(l, S)}{P(l)} \approx 0.4286$$ $$P(W|l) = \frac{P(l, W)}{P(l)} \approx 0.5714$$

Therefore, this individual is most likely a Wooly llama.
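As a quick sanity check, the arithmetic above can be reproduced in a few lines of NumPy (the variable names are ours):

```python
import numpy as np

# Training data: rows are llamas, columns are {curly, tail>14in, >400lb, shy}
suri  = np.array([[1, 0, 1, 0], [1, 1, 0, 1], [1, 1, 1, 1], [0, 0, 0, 1]])
wooly = np.array([[0, 0, 1, 0], [1, 1, 0, 0]])
query = np.array([1, 0, 0, 0])   # the new llama's feature vector

def joint(data, prior, q):
    # per-feature conditional probabilities P(feature=1 | breed)
    p = data.mean(axis=0)
    # likelihood of the query under the conditional-independence assumption
    like = np.prod(np.where(q == 1, p, 1 - p))
    return prior * like

p_s = joint(suri, 4/6, query)    # joint P(l, Suri)  = 0.03125
p_w = joint(wooly, 2/6, query)   # joint P(l, Wooly) ~ 0.04167
print(p_s / (p_s + p_w), p_w / (p_s + p_w))  # posteriors ~0.4286, ~0.5714
```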

4.2 News Data Sentiment Classification via Logistic Regression [30pts] **[P]**¶

This dataset contains the sentiments of financial news headlines from the perspective of a retail investor. Each headline has one of 3 sentiment classes: negative, positive, or neutral. In this problem, we only use the negative (class label = 0) and positive (class label = 1) classes for binary logistic regression. For data preprocessing, we remove duplicate headlines and drop the neutral class to obtain 1967 unique news headlines. We then randomly split the 1967 headlines into a training set and an evaluation set with an 8:2 ratio and use the training set to fit a binary logistic regression model.

The provided code loads the documents, preprocesses the data, and builds a “bag of words” representation of each document. Your task is to complete the missing portions of the code in logistic_regression.py to determine whether a news headline is negative or positive.

In the logistic_regression.py file, complete the following functions:

  • sigmoid: transform $s = x\theta$ into the probability of being positive using the sigmoid function $\frac{1}{1+e^{-s}}$
  • bias_augment: augment $x$ with 1's to account for the bias term in $\theta$
  • predict_probs: predict the probability of the positive label, $P(y = 1 | x)$
  • predict_labels: predict labels
  • loss: calculate the binary cross-entropy loss
  • gradient: calculate the gradient of the loss function with respect to the parameters $\theta$
  • accuracy: calculate the accuracy of predictions
  • evaluate: compute the loss and accuracy for a given set of points
  • fit: fit the logistic regression model on the training data
Logistic Regression Overview:¶
  1. In logistic regression, we model the conditional probability using parameters $\theta$, which include a bias term $b$. $$p(y_i=1\, |\, x_i;\theta )\, =\, {h}_{\theta }(x_i) = {\sigma}(x_i\theta)$$ $$p(y_i=0\, |\, x_i;\theta )\, =\, {1-h}_{\theta }(x_i) $$

where $\sigma(\cdot)$ is the sigmoid function as follows: $$\sigma(s) = \frac{1}{1+e^{-s}}$$

  1. The conditional probabilities of the positive class $(y=1)$ and the negative class $(y=0)$ for a sample $x_i$ can be combined into a single expression:
$$ p(y_i\, |\, x_i;\theta )\, =\, {({h}_{\theta }(x_i))}^{y_i}\, {(1-{h}_{\theta }(x_i))}^{1-y_i} $$
  1. Assuming that the samples are independent of each other, the likelihood of the entire dataset is the product of the probabilities of all samples. We use maximum likelihood estimation to estimate the model parameters $\theta$. The negative log-likelihood (scaled by the dataset size $N$) is given by: $$ \mathcal{L}(\theta \mid X, Y) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log h_\theta(x_i) + (1-y_i)\log(1-h_\theta(x_i)) \right] $$

where:

$N =$ number of training samples
$x_i =$ bag of words features of the i-th training sample
$y_i =$ label of the i-th training sample

Note that this will be our model's loss function

  1. Then calculate the gradient $\nabla_\theta\mathcal{L}$ and use gradient descent to optimize the loss function: $$\theta_{t+1} = \theta_{t} - \eta \cdot \nabla_\theta\mathcal{L}(\theta_t \mid X, Y)$$

where $\eta$ is the learning rate and the gradient $\nabla_\theta\mathcal{L}$ is given by:

$$ \nabla_\theta \mathcal{L}(\theta \mid X, Y) = \frac{1}{N} \sum_{i=1}^{N} x_{i}^{\top} \left( h_{\theta}(x_i) - y_i \right) $$
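The four numbered steps above fit together in a compact gradient-descent loop. The sketch below is a self-contained toy (synthetic linearly separable data, variable names of our own choosing), not the logistic_regression.py implementation you are asked to submit:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 2
X = rng.normal(size=(N, D))
true_theta = np.array([2.0, -3.0])
y = (X @ true_theta > 0).astype(float)       # synthetic separable labels

X_aug = np.hstack([np.ones((N, 1)), X])      # bias augmentation: prepend 1's
theta = np.zeros(D + 1)

def sigmoid(s):
    return 1 / (1 + np.exp(-s))

eta = 0.1                                    # learning rate
for _ in range(2000):
    h = sigmoid(X_aug @ theta)               # h_theta(x_i) for all samples
    grad = X_aug.T @ (h - y) / N             # (1/N) sum x_i^T (h - y_i)
    theta -= eta * grad                      # gradient-descent update

# Binary cross-entropy loss and accuracy of the fitted model
h = np.clip(sigmoid(X_aug @ theta), 1e-12, 1 - 1e-12)  # clip to avoid log(0)
loss = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
acc = np.mean((h >= 0.5) == y)
print(f"final loss {loss:.3f}, train accuracy {acc:.3f}")
```

Because the toy data is deterministic and separable, the loop drives the training loss down steadily; on the real bag-of-words features the same update rule applies unchanged.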

4.2.1 Local Tests for Logistic Regression [No Points]¶

You may test your implementation of the functions contained in logistic_regression.py in the cell below. Feel free to comment out tests for functions that have not been completed yet. See Using the Local Tests for more details.

In [ ]:
from utilities.localtests import TestLogisticRegression

unittest_lr = TestLogisticRegression()
unittest_lr.test_sigmoid()
unittest_lr.test_bias_augment()
unittest_lr.test_loss()
unittest_lr.test_predict_probs()
unittest_lr.test_predict_labels()
unittest_lr.test_gradient()
unittest_lr.test_accuracy()
unittest_lr.test_evaluate()
unittest_lr.test_fit()
UnitTest passed successfully for "Logistic Regression sigmoid"!
UnitTest passed successfully for "Logistic Regression bias_augment"!
UnitTest passed successfully for "Logistic Regression loss"!
UnitTest passed successfully for "Logistic Regression predict_probs"!
UnitTest passed successfully for "Logistic Regression predict_labels"!
UnitTest passed successfully for "Logistic Regression gradient"!
UnitTest passed successfully for "Logistic Regression accuracy"!
UnitTest passed successfully for "Logistic Regression evaluate"!
UnitTest passed successfully for "Logistic Regression fit"!

4.2.2 Logistic Regression Model Training [No Points]¶

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from logistic_regression import LogisticRegression as LogReg
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

news_data = pd.read_csv("./data/news-data.csv",
                    encoding='cp437', header=None)

class_to_label_mappings = {
    "negative": 0,
    "positive": 1
}

label_to_class_mappings = {
    0 : "negative",
    1 : "positive"
}

news_data.columns = ["Sentiment", "News"]
news_data.drop_duplicates(inplace=True)

news_data = news_data[news_data.Sentiment != "neutral"]

news_data["Sentiment"] = news_data["Sentiment"].map(
    class_to_label_mappings)

vectorizer = text.CountVectorizer(stop_words='english')

X = news_data['News'].values
y = news_data['Sentiment'].values.reshape(-1, 1)

RANDOM_SEED = 5
BOW = vectorizer.fit_transform(X).toarray()
indices = np.arange(len(news_data))
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(
    BOW, y, indices, test_size=0.2, random_state=RANDOM_SEED)

Fit the model to the training data. Feel free to try different learning rates lr and numbers of epochs to achieve >80% test accuracy.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

model = LogReg()
lr = 0.05
epochs = 10000
theta = model.fit(X_train, y_train, X_test, y_test, lr, epochs)
Epoch 10000:
	train loss: 0.69	train acc: 0.7
	val loss:   0.691	val acc:   0.665
[... periodic progress reports elided: train loss decreases steadily from 0.585 to 0.295 and validation accuracy climbs from 0.665 to 0.769 ...]
Epoch 10000:
	train loss: 0.292	train acc: 0.891
	val loss:   0.441	val acc:   0.766
Epoch 10000:
	train loss: 0.289	train acc: 0.894
	val loss:   0.44	val acc:   0.769
Epoch 10000:
	train loss: 0.286	train acc: 0.896
	val loss:   0.438	val acc:   0.772
Epoch 10000:
	train loss: 0.284	train acc: 0.899
	val loss:   0.437	val acc:   0.774
Epoch 10000:
	train loss: 0.281	train acc: 0.9
	val loss:   0.435	val acc:   0.774
Epoch 10000:
	train loss: 0.278	train acc: 0.902
	val loss:   0.434	val acc:   0.774
Epoch 10000:
	train loss: 0.276	train acc: 0.906
	val loss:   0.433	val acc:   0.774
Epoch 10000:
	train loss: 0.273	train acc: 0.907
	val loss:   0.431	val acc:   0.774
Epoch 10000:
	train loss: 0.271	train acc: 0.908
	val loss:   0.43	val acc:   0.779
Epoch 10000:
	train loss: 0.269	train acc: 0.909
	val loss:   0.429	val acc:   0.779
Epoch 10000:
	train loss: 0.266	train acc: 0.909
	val loss:   0.428	val acc:   0.779
Epoch 10000:
	train loss: 0.264	train acc: 0.91
	val loss:   0.427	val acc:   0.782
Epoch 10000:
	train loss: 0.262	train acc: 0.914
	val loss:   0.425	val acc:   0.782
Epoch 10000:
	train loss: 0.26	train acc: 0.914
	val loss:   0.424	val acc:   0.784
Epoch 10000:
	train loss: 0.258	train acc: 0.914
	val loss:   0.423	val acc:   0.784
Epoch 10000:
	train loss: 0.255	train acc: 0.915
	val loss:   0.422	val acc:   0.784
Epoch 10000:
	train loss: 0.253	train acc: 0.918
	val loss:   0.421	val acc:   0.784
Epoch 10000:
	train loss: 0.251	train acc: 0.92
	val loss:   0.421	val acc:   0.784
Epoch 10000:
	train loss: 0.25	train acc: 0.921
	val loss:   0.42	val acc:   0.787
Epoch 10000:
	train loss: 0.248	train acc: 0.922
	val loss:   0.419	val acc:   0.789
Epoch 10000:
	train loss: 0.246	train acc: 0.923
	val loss:   0.418	val acc:   0.789
Epoch 10000:
	train loss: 0.244	train acc: 0.924
	val loss:   0.417	val acc:   0.789
Epoch 10000:
	train loss: 0.242	train acc: 0.926
	val loss:   0.416	val acc:   0.789
Epoch 10000:
	train loss: 0.24	train acc: 0.927
	val loss:   0.416	val acc:   0.789
Epoch 10000:
	train loss: 0.239	train acc: 0.927
	val loss:   0.415	val acc:   0.792
Epoch 10000:
	train loss: 0.237	train acc: 0.928
	val loss:   0.414	val acc:   0.792
Epoch 10000:
	train loss: 0.235	train acc: 0.928
	val loss:   0.413	val acc:   0.794
Epoch 10000:
	train loss: 0.234	train acc: 0.928
	val loss:   0.413	val acc:   0.794
Epoch 10000:
	train loss: 0.232	train acc: 0.929
	val loss:   0.412	val acc:   0.794
Epoch 10000:
	train loss: 0.23	train acc: 0.929
	val loss:   0.411	val acc:   0.797
Epoch 10000:
	train loss: 0.229	train acc: 0.93
	val loss:   0.411	val acc:   0.797
Epoch 10000:
	train loss: 0.227	train acc: 0.931
	val loss:   0.41	val acc:   0.797
Epoch 10000:
	train loss: 0.226	train acc: 0.933
	val loss:   0.409	val acc:   0.797
Epoch 10000:
	train loss: 0.224	train acc: 0.934
	val loss:   0.409	val acc:   0.797
Epoch 10000:
	train loss: 0.223	train acc: 0.936
	val loss:   0.408	val acc:   0.797
Epoch 10000:
	train loss: 0.222	train acc: 0.936
	val loss:   0.408	val acc:   0.802
Epoch 10000:
	train loss: 0.22	train acc: 0.938
	val loss:   0.407	val acc:   0.802
Epoch 10000:
	train loss: 0.219	train acc: 0.939
	val loss:   0.407	val acc:   0.802
Epoch 10000:
	train loss: 0.217	train acc: 0.94
	val loss:   0.406	val acc:   0.802
Epoch 10000:
	train loss: 0.216	train acc: 0.941
	val loss:   0.406	val acc:   0.802
Epoch 10000:
	train loss: 0.215	train acc: 0.942
	val loss:   0.405	val acc:   0.802
Epoch 10000:
	train loss: 0.214	train acc: 0.943
	val loss:   0.405	val acc:   0.802
Epoch 10000:
	train loss: 0.212	train acc: 0.943
	val loss:   0.404	val acc:   0.802
Epoch 10000:
	train loss: 0.211	train acc: 0.943
	val loss:   0.404	val acc:   0.802
Epoch 10000:
	train loss: 0.21	train acc: 0.945
	val loss:   0.403	val acc:   0.802
Epoch 10000:
	train loss: 0.209	train acc: 0.945
	val loss:   0.403	val acc:   0.802
Epoch 10000:
	train loss: 0.207	train acc: 0.945
	val loss:   0.402	val acc:   0.802
Epoch 10000:
	train loss: 0.206	train acc: 0.947
	val loss:   0.402	val acc:   0.802
Epoch 10000:
	train loss: 0.205	train acc: 0.947
	val loss:   0.401	val acc:   0.799
Epoch 10000:
	train loss: 0.204	train acc: 0.948
	val loss:   0.401	val acc:   0.797
Epoch 10000:
	train loss: 0.203	train acc: 0.949
	val loss:   0.401	val acc:   0.797
Epoch 10000:
	train loss: 0.202	train acc: 0.95
	val loss:   0.4	val acc:   0.797
Epoch 10000:
	train loss: 0.2	train acc: 0.95
	val loss:   0.4	val acc:   0.799
Epoch 10000:
	train loss: 0.199	train acc: 0.951
	val loss:   0.4	val acc:   0.799
Epoch 10000:
	train loss: 0.198	train acc: 0.951
	val loss:   0.399	val acc:   0.802
Epoch 10000:
	train loss: 0.197	train acc: 0.951
	val loss:   0.399	val acc:   0.802
Epoch 10000:
	train loss: 0.196	train acc: 0.952
	val loss:   0.399	val acc:   0.802
Epoch 10000:
	train loss: 0.195	train acc: 0.952
	val loss:   0.398	val acc:   0.802
Epoch 10000:
	train loss: 0.194	train acc: 0.952
	val loss:   0.398	val acc:   0.802
Epoch 10000:
	train loss: 0.193	train acc: 0.952
	val loss:   0.398	val acc:   0.805
Epoch 10000:
	train loss: 0.192	train acc: 0.952
	val loss:   0.397	val acc:   0.805
Epoch 10000:
	train loss: 0.191	train acc: 0.952
	val loss:   0.397	val acc:   0.807

4.2.3 Logistic Regression Model Evaluation [No Points]¶

Evaluate the model on the test dataset

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

test_loss, test_acc = model.evaluate(X_test, y_test, theta)
print(f"Test Dataset Accuracy: {round(test_acc, 3)}")
Test Dataset Accuracy: 0.807

Plotting the loss function on the training data and the test data for every 100th epoch

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

model.plot_loss()

Plotting the accuracy function on the training data and the test data for each epoch

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

model.plot_accuracy()

Check out sample evaluations from the test set.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

num_samples = 10
for i in range(10):
    rand_index = np.random.randint(0, len(X_test))
    x_test = np.reshape(X_test[rand_index], (1, X_test.shape[1]))
    prob = model.predict_probs(model.bias_augment(x_test), theta)
    pred = model.predict_labels(prob)
    print(f"Input News: {X[indices_test[rand_index]]}\n")
    print(f"Predicted Sentiment: {label_to_class_mappings[pred[0][0]]}")
    print(f"Actual Sentiment: {label_to_class_mappings[y_test[rand_index][0]]}\n")
Input News: Finnish Larox has signed a contract with the Talvivaara Project for the delivery of filters to the Talvivaara nickel mine in Sotkamo , in Finland .

Predicted Sentiment: positive
Actual Sentiment: positive

Input News: Finnish Bank of Åland reports operating profit of EUR 2.2 mn in the first quarter of 2010 , down from EUR 6.3 mn in the corresponding period in 2009 .

Predicted Sentiment: positive
Actual Sentiment: negative

Input News: After the reporting period , BioTie North American licensing partner Somaxon Pharmaceuticals announced positive results with nalmefene in a pilot Phase 2 clinical trial for smoking cessation .

Predicted Sentiment: positive
Actual Sentiment: positive

Input News: The move is aimed at boosting sales , cost-efficiency and market share in Finland .

Predicted Sentiment: positive
Actual Sentiment: positive

Input News: The long-term contract is global .

Predicted Sentiment: positive
Actual Sentiment: positive

Input News: Sales in Finland rose by 3.9 % and international growth was 0.7 % .

Predicted Sentiment: positive
Actual Sentiment: positive

Input News: Ramirent made 18 million kroons EUR 1.15 mln loss last year ; the year before the company was 7.3 million kroons in the black .

Predicted Sentiment: positive
Actual Sentiment: negative

Input News: In Finland , insurance company Pohjola and the Finnish motorcyclist association have signed an agreement with the aim of improving motorcyclists ' traffic safety .

Predicted Sentiment: positive
Actual Sentiment: positive

Input News: down to EUR5 .9 m H1 '09 3 August 2009 - Finnish media group Ilkka-Yhtyma Oyj ( HEL : ILK2S ) said today its net profit fell 45 % on the year to EUR5 .9 m in the first half of 2009 .

Predicted Sentiment: negative
Actual Sentiment: negative

Input News: When the situation normalises , the company will be able to increase the amount of residential units for sale in St Petersburg and Moscow , in particular .

Predicted Sentiment: positive
Actual Sentiment: positive

Q5: Noise in PCA and Linear Regression [15pts] **[W]**¶

Both PCA and least squares regression can be viewed as algorithms for inferring (linear) relationships among data variables. In this part of the assignment, you will develop some intuition for the differences between these two approaches and develop an understanding of the settings that are better suited to using PCA or better suited to using the least squares fit.

The high-level idea is that PCA is useful when there is a set of latent (hidden/underlying) variables, and all the coordinates of your data are linear combinations (plus noise) of those variables. The least squares fit is useful when you have direct access to the independent variables, so any noisy coordinates are linear combinations (plus noise) of known variables.

5.1 Slope Functions [5 pts] **[W]**¶

In the following cell, complete the following:

  1. pca_slope: For this function, assume that $X$ is the first feature and $y$ is the second feature of the data. Write a function that takes in the first feature vector $X$ and the second feature vector $y$, stacks these two feature vectors into a single N x 2 matrix, and uses this matrix to determine the first principal component vector of the dataset. Be careful about how you stack the two vectors; printing the stacked matrix should help you debug. Finally, return the slope of this first component. You should use the PCA implementation from Q2.

  2. lr_slope: Write a function that takes $X$ and $y$ and returns the slope of the least squares fit. You should use the Linear Regression implementation from Q3 but do not use any kind of regularization. Think about how weight could relate to slope.

In later subparts, we consider the case where our data consists of noisy measurements of $x$ and $y$. For each part, we will evaluate the quality of the relationship recovered by PCA, and that recovered by standard least squares regression.

As a reminder, least squares regression minimizes the squared error of the dependent variable from its prediction. Namely, given $(x_i, y_i)$ pairs, least squares returns the line $l(x)$ that minimizes $\sum_i (y_i − l(x_i))^2$.
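As a sanity check before wiring up the Q2/Q3 classes, both slopes can be computed with plain numpy on the noiseless case. The helper names pca_slope_np and lr_slope_np below are illustrative only and are not part of the assignment API:

```python
import numpy as np

def pca_slope_np(X, y):
    # Stack the two features into an N x 2 matrix and mean-center it
    data = np.hstack((X.reshape(-1, 1), y.reshape(-1, 1)))
    data = data - data.mean(axis=0)
    # Rows of Vt are the principal directions; the first captures maximum variance
    _, _, Vt = np.linalg.svd(data, full_matrices=False)
    v1 = Vt[0]
    return v1[1] / v1[0]  # slope = rise / run

def lr_slope_np(X, y):
    # Closed-form least squares with no intercept and no regularization
    w, *_ = np.linalg.lstsq(X.reshape(-1, 1), y.reshape(-1, 1), rcond=None)
    return float(w[0, 0])

x = np.arange(0, 1.02, 0.02)
print(pca_slope_np(x, 4 * x))  # ≈ 4.0
print(lr_slope_np(x, 4 * x))   # ≈ 4.0
```

Both helpers should recover the coefficient 4 exactly when there is no noise, mirroring the check in the next cell.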

In [ ]:
import numpy as np
from pca import PCA
from regression import Regression

def pca_slope(X, y):
    """
    Calculates the slope of the first principal component given by PCA

    Args:
        X: N x 1 array of feature x
        y: N x 1 array of feature y
    Return:
        slope: (float) scalar slope of the first principal component
    """

    # Stack the two feature vectors column-wise into an N x 2 matrix
    data = np.hstack((np.reshape(X, (-1, 1)), np.reshape(y, (-1, 1))))
    pca = PCA()
    pca.fit(data)
    # The first principal component is the first row of V; slope = rise / run
    slope = pca.V[0, 1] / pca.V[0, 0]
    return slope

def lr_slope(X, y):
    """
    Calculates the slope of the best fit returned by linear_fit_closed()

    For this function don't use any regularization

    Args:
        X: N x 1 array corresponding to a dataset
        y: N x 1 array of labels y
    Return:
        slope: (float) slope of the best fit
    """

    lr = Regression()
    # With a single feature and no intercept, the learned weight is the slope
    w = lr.linear_fit_closed(X, y)
    slope = w[0, 0]
    return slope

We will consider a simple example with two variables, $x$ and $y$, where the true relationship between the variables is $y = 4x$. Our goal is to recover this relationship—namely, recover the coefficient “4”. We set $X = [0, .02, .04, .06, \ldots, 1]$ and $y = 4x$. Make sure both functions return 4.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
x = np.arange(0, 1.02, 0.02).reshape(-1, 1)

y = 4 * np.arange(0, 1.02, 0.02).reshape(-1, 1)

print("Slope of first principal component", pca_slope(x, y))

print("Slope of best linear fit", lr_slope(x, y))

fig = plt.figure()
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")

if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT*0.8)
    
plt.show()
Slope of first principal component 4.0
Slope of best linear fit 4.0

5.2 Analysis Setup [5 pts] **[W]**¶

Error in y¶

In this subpart, we consider the setting where our data consists of the actual values of $x$, and noisy estimates of $y$. Run the following cell to see how the data looks when there is error in $y$.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
base = np.arange(0.001, 1.001, 0.001).reshape(-1, 1)
c = 0.5
X = base
y = 4 * base + np.random.normal(loc=[0], scale=c, size=base.shape)

fig = plt.figure()
plt.scatter(X, y)
plt.xlabel("x")
plt.ylabel("y")

if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)

plt.show()

In the following cell, you will implement the addNoise function:

  1. Create a vector $X$ where $X = [x_1, x_2, \ldots, x_{1000}] = [.001, .002, .003, \ldots, 1]$.

  2. For a given noise level $c$, set $\hat{y}_i \sim 4x_i + \mathcal{N}(0, c) = 4i/1000 + \mathcal{N}(0, c)$, and $\hat{Y} = [\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_{1000}]$. You can use the np.random.normal function, with scale equal to the noise level, to add noise to your points.

  3. Notice the parameter x_noise in the addNoise function. When this parameter is set to True, you will also have to add noise to $X$. For a given noise level $c$, let $\hat{x}_i \sim x_i + \mathcal{N}(0, c) = i/1000 + \mathcal{N}(0, c)$, and $\hat{X} = [\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{1000}]$.

  4. Return the pca_slope and lr_slope values of the dataset you have created, where $\hat{Y}$ has noise (and the first feature is $X$ or $\hat{X}$, depending on x_noise).

Hint 1: Refer to the above example on how to add noise to $X$ or $Y$

Hint 2: Be careful not to add double noise to $X$ or $Y$

In [ ]:
def addNoise(c, x_noise = False, seed = 1):
    """
    Creates a dataset with noise and calculates the slope of the dataset
    using the pca_slope and lr_slope functions implemented in this class.

    Args: 
        c: (float) scalar, a given noise level to be used on Y and/or X
        x_noise: (Boolean) When set to False, X should not have noise added
                 When set to True, X should have noise. 
                 Note that the noise added to X should be different from the 
                 noise added to Y. You should NOT use the same noise you add 
                 to Y here.
        seed: (int) Random seed
    Return:
        pca_slope_value: (float) slope value of dataset created using pca_slope
        lr_slope_value: (float) slope value of dataset created using lr_slope

    """
    np.random.seed(seed) #### DO NOT CHANGE THIS ####
    
    ############# START YOUR CODE BELOW #############
    
    X = np.linspace(0.001, 1, 1000)
    X = X.reshape((X.size, 1))
    yhat = 4*X + np.random.normal(0, c, size=X.shape)
    if x_noise:
        X = X + np.random.normal(0, c, size=X.shape)
    
    pca_slope_value = pca_slope(X, yhat)
    lr_slope_value = lr_slope(X, yhat)

    ############# END YOUR CODE ABOVE #############
    return pca_slope_value, lr_slope_value

A scatter plot with $c$ on the horizontal axis and the output of pca_slope and lr_slope on the vertical axis has already been implemented for you.

A sample $\hat{Y}$ is taken for each $c$ in $[0, 0.05, 0.1, \ldots, 0.95, 1.0]$. The output of pca_slope is plotted as a red dot, and the output of lr_slope as a blue dot. This is repeated 30 times, so we end up with a plot of 1260 dots, in 21 columns of 60, half red and half blue. Note that the plot you get might not look exactly like the TA version; that is fine, because your noise may be randomized slightly differently than ours.

NOTE: Here, x_noise = False since we only want Y to have noise.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
pca_slope_values = []
linreg_slope_values = []
c_values = []
s_idx = 0

for i in range(30):
    for c in np.arange(0, 1.05, 0.05):
        
        # Calculate pca_slope_value (psv) and lr_slope_value (lsv)
        psv, lsv = addNoise(c, seed = s_idx)
        
        # Append pca and lr slope values to list for plot function
        pca_slope_values.append(psv)
        linreg_slope_values.append(lsv)
        
        # Append c value to list for plot function
        c_values.append(c)
        
        # Increment random seed index
        s_idx += 1

fig = plt.figure()    
plt.scatter(c_values, pca_slope_values, c='r')
plt.scatter(c_values, linreg_slope_values, c='b')
plt.xlabel("c")
plt.ylabel("slope")

if not STUDENT_VERSION:
    fig.text(0.6, 0.4, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.5, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
plt.show()

Error in $x$ and $y$¶

We will now examine the case where our data consists of noisy estimates of both $x$ and $y$. Run the following cell to see how the data looks when there is error in both.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
base = np.arange(0.001, 1, 0.001).reshape(-1, 1)
c = 0.5
X = base + np.random.normal(loc=[0], scale=c, size=base.shape)
y = 4 * base + np.random.normal(loc=[0], scale=c, size=base.shape)

fig = plt.figure()
plt.scatter(X, y)
plt.xlabel("x")
plt.ylabel("y")

if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.8, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
plt.show()

In the cell below, we graph the predicted PCA and LR slopes on the vertical axis against the value of $c$ on the horizontal axis. Note that the graph you get might not look exactly like the TA version; that is fine, because your noise may be randomized slightly differently than ours.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
pca_slope_values = []
linreg_slope_values = []
c_values = []
s_idx = 0

for i in range(30):
    for c in np.arange(0, 1.05, 0.05):
        
        # Calculate pca_slope_value (psv) and lr_slope_value (lsv), notice x_noise = True
        psv, lsv = addNoise(c, x_noise = True, seed = s_idx)
        
        # Append pca and lr slope values to list for plot function
        pca_slope_values.append(psv)
        linreg_slope_values.append(lsv)
        
        # Append c value to list for plot function
        c_values.append(c)
        
        # Increment random seed index
        s_idx += 1

fig = plt.figure()
plt.scatter(c_values, pca_slope_values, c='r')
plt.scatter(c_values, linreg_slope_values, c='b')
plt.xlabel("c")
plt.ylabel("slope")

if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.5, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
plt.show()

5.3. Analysis [5 pts] **[W]**¶

Based on your observations from the previous subsections, answer the following questions about the two cases (error in $Y$ only, and error in both $X$ and $Y$) in 2-3 lines.

NOTE:

  1. The closer the recovered slope is to the actual slope ("4" here), the better the algorithm is performing.
  2. You don't need to provide a mathematical proof for this question.
  3. Understanding how PCA and Linear Regression work should help you decipher which case is better for which algorithm. Base your answer on this understanding of how each algorithm works.

QUESTIONS:

  1. In which case does PCA perform worse? Why does PCA perform worse in this case? (2 pts)
  2. Why does PCA perform better in the other case? (1 pt)
  3. In which case does Linear Regression perform well? Why does it perform well in that case? (2 pts)

ANSWERS:

  1. PCA performs worse when only $Y$ is noisy. PCA finds the direction of maximum variance, minimizing the perpendicular distance of the points to the component; when the noise sits entirely in $y$ while $x$ is clean, the extra variance is concentrated along the $y$-axis, which tilts the first principal component away from the true line.
  2. PCA performs better when both $X$ and $Y$ carry noise of the same level, because the noise is then symmetric about the true line: the perpendicular scatter balances out, so the direction of maximum variance stays aligned with the underlying relationship.
  3. Linear regression performs well when only $Y$ is noisy. Least squares minimizes the vertical squared error in $y$, and the zero-mean noise in $y$ averages out across samples; noise in $x$, by contrast, biases the estimated slope toward zero.
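The attenuation effect described in answer 3 can be sanity-checked numerically. This is only a sketch; slope_no_intercept is a hypothetical helper, not part of the assignment code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.001, 1, 1000)
c = 0.5
y_noisy = 4 * x + rng.normal(0, c, x.shape)   # noise in y only
x_noisy = x + rng.normal(0, c, x.shape)       # additional noise in x

def slope_no_intercept(x, y):
    # Least squares slope of a line through the origin
    return float(x @ y / (x @ x))

print(slope_no_intercept(x, y_noisy))        # close to 4
print(slope_no_intercept(x_noisy, y_noisy))  # shrunk toward 0 (attenuation)
```

With noise only in $y$, the estimate stays near 4; once $x$ is also noisy, the denominator $x^\top x$ inflates and the fitted slope shrinks, matching the blue dots drifting downward in the second plot.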

Q6 Feature Reduction Implementation [25pts Bonus for All] **[P]** | **[W]**¶

6.1 Implementation [18 Points] **[P]**¶

Feature selection is an integral aspect of machine learning. It is the process of selecting a subset of relevant features that are to be used as the input for the machine learning task. Feature selection may lead to simpler models for easier interpretation, shorter training times, avoidance of the curse of dimensionality, and better generalization by reducing overfitting.

In the feature_reduction.py file, complete the following functions:

  • forward_selection
  • backward_elimination

These functions should each output a list of features.

Forward Selection:¶

In forward selection, we start with a null model, fit the model with one individual feature at a time, and select the feature with the minimum p-value. We continue adding features in this way until no candidate feature has a p-value below the significance level.

Steps to implement it:

  1. Choose a significance level (given to you).
  2. Fit all possible simple regression models by considering one feature at a time.
  3. Select the feature with the lowest p-value.
  4. Fit all possible models with one extra feature added to the previously selected feature(s).
  5. Select the feature with the minimum p-value again. If its p-value < significance level, add it to the selected set and go to Step 4; otherwise, terminate.

Backward Elimination:¶

In backward elimination, we start with a full model, and then remove the insignificant feature with the highest p-value (that is greater than the significance level). We continue to do this until we have a final set of significant features.

Steps to implement it:

  1. Choose a significance level (given to you).
  2. Fit a full model including all the features.
  3. Select the feature with the highest p-value. If (p-value > significance level), go to Step 4, otherwise terminate.
  4. Remove the feature under consideration.
  5. Fit a model without this feature. Repeat entire process from Step 3 onwards.

HINT 1: The p-value is known as the observed significance value for a null hypothesis. In our case, the p-value of a feature is associated with the hypothesis $H_{0}\colon \beta_j = 0$. If $\beta_j = 0$, then this feature contributes no predictive power to our model and should be dropped. We reject the null hypothesis if the p-value is smaller than our significance level. More briefly, a p-value is a measure of how much the given feature significantly represents an observed change. A lower p-value represents higher significance. Some more information about p-values can be found here: https://towardsdatascience.com/what-is-a-p-value-b9e6c207247f

HINT 2: For this function, you will have to install statsmodels if not installed already. To do this, run pip install statsmodels in command line/terminal. In the case that you are using an Anaconda environment, run conda install -c conda-forge statsmodels in the command line/terminal. For more information about installation, refer to https://www.statsmodels.org/stable/install.html. The statsmodels library is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration. You will have to use this library to choose a regression model to fit your data against. Some more information about this module can be found here: https://www.statsmodels.org/stable/index.html

HINT 3: For step 2 in each of the forward and backward selection functions, you can use the sm.OLS function as your regression model. Also, do not forget to add a bias to your regression model. A function that may help you is sm.add_constant.

TIP 4: You should be able to implement these functions using only the libraries provided in the cell below.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from feature_reduction import FeatureReduction
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
bc_dataset = load_breast_cancer()
bc = pd.DataFrame(bc_dataset.data, columns = bc_dataset.feature_names)
bc['Diagnosis'] = bc_dataset.target
X = bc.drop('Diagnosis', axis=1)
y = bc['Diagnosis']
featureselection = FeatureReduction()
#Run the functions to make sure two lists are generated, one for each method
print("Features selected by forward selection:", FeatureReduction.forward_selection(X, y))
print("Features selected by backward elimination:", FeatureReduction.backward_elimination(X, y))

6.2 Feature Selection - Discussion [7pts] **[W]**¶

Question 6.2.1:¶

We have seen two regression methods, namely Lasso and Ridge regression, earlier in this assignment. Another extremely important and common use case of these methods is feature selection. Assuming no restrictions on the dataset, which of the two methods is generally more appropriate for feature selection (choose one)? Why? (3 pts)

Answer: ...

Question 6.2.2:¶

We have seen that we use different subsets of features to get different regression models. These models depend on the relevant features that we have selected. Using forward selection, what fraction of the total possible models can we explore? Assume that the total number of features that we have at our disposal is $N$. Remember that in stepwise feature selection (like forward selection and backward elimination), we always include an intercept in our model, so you only need to consider the $N$ features. (4 pts)

Answer: ...

Q7: Netflix Movie Recommendation Problem Solved using SVD [10pts Bonus for All] **[P]**¶

Let us try to tackle the famous problem of movie recommendation using just the SVD functions we have implemented. We are given a table of reviews that 600+ users have provided for close to 10,000 different movies. Our challenge is to predict how highly a user would rate a movie that they have not seen (or rated) yet. Once we have these ratings, we can predict which movies to recommend to that user.

Understanding How SVD Helps in Movie Recommendation¶

We are given a dataset of user-movie ratings ($R$) that looks like the following:

Ratings in the matrix range from 1-5. In addition, the matrix contains nan wherever there is no rating provided by the user for the corresponding movie. One simple way to utilize this matrix to predict movie ratings for a given user-movie pair would be to fill in each row / column with the average rating for that row / column. For example: For each movie, if any rating is missing, we could just fill in the average value of all available ratings and expect this to be around the actual / expected rating.

While this may sound like a good approximation, it turns out that by just using SVD we can improve the accuracy of the predicted rating.

How does SVD fit into this picture?¶

Recall how we previously used SVD to compress images by throwing out less important information. We can apply the same idea to our matrix ($R$) to generate another matrix ($R\_$) which provides the same information, i.e., ratings for any user-movie pair, but by combining only the most important features.

Let's look at this with an example:

Assume that decomposition of matrix $R$ looks like:

$$ R = U\Sigma V^{T} $$

We can re-write this decomposition as follows:

$$ R = U\sqrt\Sigma \sqrt\Sigma V^{T} $$

If we were to take only the top $k$ singular values from this matrix, we could again write this as:

$$ R\_ = U\sqrt\Sigma_k \sqrt\Sigma_k V^{T} $$

Thus we have now effectively separated our ratings matrix $R$ into two matrices given by: $ U_k = U_{[:k]}\sqrt\Sigma_k $ and $ V_k = \sqrt\Sigma_k V_{[:k]}^{T} $

There are many ways to visualize the importance of $U$ and $V$ matrices but with respect to our context of movie ratings, we can visualize these matrices as follows:

We can imagine each row of $U_k$ as holding information about how much each user likes each feature (feature 1, feature 2, feature 3, ... feature $k$). Similarly, we can imagine each column of $V_k^{T}$ as holding information about how much each movie relates to the given features (feature 1, feature 2, feature 3, ... feature $k$).

Let's denote a row of $U_k$ by $u_i$ and a column of $V_k^{T}$ by $m_j$. Then the dot product $u_i \cdot m_j$ tells us how much user $i$ likes movie $j$.
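The whole pipeline described above can be sketched on a toy example, with numpy's SVD standing in for the Q2 implementation (the matrix, k, and the mean-fill baseline are illustrative):

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = movies, np.nan = unrated
R = np.array([[5.0, 4.0, np.nan, 1.0],
              [4.0, np.nan, np.nan, 1.0],
              [1.0, 1.0, np.nan, 5.0],
              [np.nan, 1.0, 5.0, 4.0]])

# Baseline fill: replace each missing entry with that movie's mean rating
col_means = np.nanmean(R, axis=0)
R_filled = np.where(np.isnan(R), col_means, R)

# Keep only the top k singular values and split sqrt(Sigma_k) between U and V
k = 2
U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
sqrt_S = np.diag(np.sqrt(s[:k]))
U_k = U[:, :k] @ sqrt_S   # per-user feature vectors
V_k = sqrt_S @ Vt[:k, :]  # per-movie feature vectors

# Predicted rating for user i, movie j is the dot product u_i . m_j
R_hat = U_k @ V_k
print(np.round(R_hat, 2))
```

Because $U_k V_k$ is exactly the rank-$k$ truncation $U_{[:k]}\Sigma_k V_{[:k]}^{T}$, the reconstruction keeps the dominant user/movie structure while smoothing over the filled-in entries.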

What have we achieved by doing this?¶

Starting with a matrix $R$ containing very few ratings, we have summarized the sparse ratings matrix into matrices $U_k$ and $V_k$, which contain feature vectors for the users and the movies respectively. Since these feature vectors are built from only the most important $k$ features (via our SVD), we can predict user-movie ratings that are closer to the actual values than the row / column averages (recall our brute-force solution discussed above).

In practice this method is still far from state-of-the-art, but even with this naive and simple approach we can build some powerful visualizations, as we will see in part 3.

We have divided the task into 3 parts:

1) Implement recommender_svd to return matrices $U_k$ and $V_k$

2) Implement predict to predict top 3 movies a given user would watch

3) (Ungraded) Feel free to run the final cell to see some visualizations of the feature vectors you have generated

Hint: Movie IDs are IDs assigned to the movies in the dataset and can be greater than the number of movies. This is why we have also given movies_index and users_index, which map between movie / user IDs and the indices in the ratings matrix. Please make sure to use these mappings.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from svd_recommender import SVDRecommender
from regression import Regression
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

recommender = SVDRecommender()
recommender.load_movie_data()
regression = Regression()
# Read the data into the respective train and test dataframes
train, test = recommender.load_ratings_datasets()
print("---------------------------------------------")
print("Train Dataset Stats:")
print("Shape of train dataset: {}".format(train.shape))
print("Number of unique users (train): {}".format(train['userId'].unique().shape[0]))
print("Number of unique movies (train): {}".format(train['movieId'].unique().shape[0]))
print("Sample of Train Dataset:")
print("------------------------------------------")
print(train.head())
print("------------------------------------------")
print("Test Dataset Stats:")
print("Shape of test dataset: {}".format(test.shape))
print("Number of unique users (test): {}".format(test['userId'].unique().shape[0]))
print("Number of unique movies (test): {}".format(test['movieId'].unique().shape[0]))
print("Sample of Test Dataset:")
print("------------------------------------------")
print(test.head())
print("------------------------------------------")

# We will first convert our dataframe into a matrix of Ratings: R
# R[i][j] will indicate rating for movie:(j) provided by user:(i)
# users_index, movies_index will store the mapping between array indices and actual userId / movieId
R, users_index, movies_index = recommender.create_ratings_matrix(train)
print("Shape of Ratings Matrix (R): {}".format(R.shape))

# Replacing `nan` with average rating given for the movie by all users
# Additionally, zero-centering the array to perform SVD
mask = np.isnan(R)
masked_array = np.ma.masked_array(R, mask)
r_means = np.array(np.mean(masked_array, axis=0))
R_filled = masked_array.filled(r_means)
R_filled = R_filled - r_means

7.1.1 Implement the recommender_svd method to use SVD for Recommendation [5pts] **[P]**¶

In svd_recommender.py file, complete the following function:

  • recommender_svd: Use the above equations to output $U_k$ and $V_k$. You can utilize the svd and compress methods from imgcompression.py to retrieve your $U$, $\Sigma$ and $V$ matrices.
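One possible shape for this function, sketched here with numpy's built-in SVD rather than the imgcompression.py helpers the assignment asks you to reuse. Treat this as illustrative only, not the graded implementation:

```python
import numpy as np

# Illustrative sketch only: the graded version should reuse the svd /
# compress helpers from imgcompression.py instead of calling numpy directly.
def recommender_svd_sketch(R, k):
    """Return U_k (num_users x k) and V_k (k x num_movies)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    sqrt_sigma = np.sqrt(s[:k])
    U_k = U[:, :k] * sqrt_sigma           # absorb sqrt(Sigma_k) into U
    V_k = sqrt_sigma[:, None] * Vt[:k]    # absorb sqrt(Sigma_k) into V^T
    return U_k, V_k
```

With $k$ equal to the full rank, $U_k V_k$ reconstructs $R$ exactly; smaller $k$ keeps only the dominant features.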

Local Test for recommender_svd Function [No Points]¶

You may test your implementation of the function in the cell below. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

from utilities.localtests import TestSVDRecommender

unittest_svd_rec = TestSVDRecommender()
unittest_svd_rec.test_recommender_svd()
In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

# Implement the method `recommender_svd` and run it for the following values of features
no_of_features = [2, 3, 8, 15, 18, 25, 30]
test_errors = []

for k in no_of_features:
    U_k, V_k = recommender.recommender_svd(R_filled, k)
    pred = [] # to store the predicted ratings
    for _, row in test.iterrows():
        user = row['userId']
        movie = row['movieId']
        u_index = users_index[user]
        # If we have a prediction for this movie, use that
        if movie in movies_index:
            m_index = movies_index[movie]
            pred_rating = np.dot(U_k[u_index, :], V_k[:,m_index]) + r_means[m_index]
        # Else, use the average of the user's predicted ratings over all movies
        else:
            pred_rating = np.mean(np.dot(U_k[u_index], V_k) + r_means)
        pred.append(pred_rating)
    test_error = regression.rmse(test['rating'], pred)
    test_errors.append(test_error)
    print("RMSE for k = {} --> {}".format(k, test_error))

Plot the Test Error over the different values of k

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################
fig = plt.figure()
plt.plot(no_of_features, test_errors, 'bo')
plt.plot(no_of_features, test_errors)
plt.xlabel("Value for k")
plt.ylabel("RMSE on Test Dataset")
plt.title("SVD Recommendation Test Error with Different k values")

if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.5, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
    
plt.show()

7.1.2 Implement the predict method to find which movie a user is interested in watching next [5pts] **[P]**¶

Our goal here is to predict movies that a user would be interested in watching next. Since our dataset contains a large number of movies and our model is very naive, picking the top 3 movies from the full set can produce results that are hard to interpret. Therefore, we'll restrict this prediction to movies from the subset given by movies_pool.

Let us consider a user (ID: 660) who has already watched and rated highly (>3) the following movies:

  • Iron Man (2008)
  • Thor: The Dark World (2013)
  • Avengers, The (2012)

The following cell tries to predict which of the movies in the list below the user would be most interested in watching next:
movies_pool:

  • Ant-Man (2015)
  • Iron Man 2 (2010)
  • Avengers: Age of Ultron (2015)
  • Thor (2011)
  • Captain America: The First Avenger (2011)
  • Man of Steel (2013)
  • Star Wars: Episode IV - A New Hope (1977)
  • Ladybird Ladybird (1994)
  • Man of the House (1995)
  • Jungle Book, The (1994)

In svd_recommender.py file, complete the following function:

  • predict: Predict the next 3 movies that the user would be most interested in watching among the ones above.

HINT: You can use the method get_movie_id_by_name to convert movie names into movie IDs and vice-versa.

NOTE: The user may have already watched and rated some of the movies in movies_pool. Remember to filter these out before returning the output. The original Ratings Matrix, $R$ might come in handy here along with np.isnan
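A minimal sketch of the predict logic described above. The function and variable names here are our own illustration, not the graded API in svd_recommender.py:

```python
import numpy as np

# Hypothetical sketch: score each candidate movie with the dot product
# u_i . m_j, skip movies the user already rated (non-nan entries of the
# original ratings matrix R), and return the top-n remaining candidates.
def predict_sketch(R, U_k, V_k, user_index, candidate_indices, top_n=3):
    scores = []
    for m_index in candidate_indices:
        if not np.isnan(R[user_index, m_index]):  # already watched/rated
            continue
        scores.append((U_k[user_index] @ V_k[:, m_index], m_index))
    scores.sort(reverse=True)                     # highest score first
    return [m for _, m in scores[:top_n]]
```

In the actual assignment you would convert movie names in movies_pool to matrix indices via get_movie_id_by_name and movies_index before scoring.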

Local Test for predict Functions [No Points]¶

You may test your implementation of the function in the cell below. See Using the Local Tests for more details.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

unittest_svd_rec.test_predict()

7.2 Visualize Movie Vectors [No Points]¶

Our model is still very naive, but it can nonetheless be used for some powerful analysis, such as clustering similar movies together based on users' ratings.

We have said that the matrix $V_k$ generated above contains information about movies: each column of $V_k$ holds (feature 1, feature 2, ... feature $k$) for one movie. In other words, $V_k$ gives us a feature vector of length $k$ for each movie that we can visualize in a $k$-dimensional space. Using these feature vectors, we can, for example, find out which movies are similar to or different from one another.

While we would love to visualize a $k$-dimensional space, the constraints of our 2D screens won't really allow us to do so. Instead, let us set $k=2$ and plot the feature vectors for just a couple of these movies.

As a fun activity run the following cell to visualize how our model separates the two sets of movies given below.

NOTE: There are 2 possible visualizations. Your plot could be the one that's given on the expected PDF or the one where the y-coordinates are inverted.

In [ ]:
###############################
### DO NOT CHANGE THIS CELL ###
###############################

marvel_movies = ['Thor: The Dark World (2013)',
                'Avengers: Age of Ultron (2015)',
                'Ant-Man (2015)',
                'Iron Man 2 (2010)',
                'Avengers, The (2012)',
                'Thor (2011)',
                'Captain America: The First Avenger (2011)']
marvel_labels = ['Blue'] * len(marvel_movies)
star_wars_movies = [
                'Star Wars: Episode IV - A New Hope (1977)',
                'Star Wars: Episode V - The Empire Strikes Back (1980)',
                'Star Wars: Episode VI - Return of the Jedi (1983)',
                'Star Wars: Episode I - The Phantom Menace (1999)',
                'Star Wars: Episode II - Attack of the Clones (2002)',
                'Star Wars: Episode III - Revenge of the Sith (2005)',
]
star_wars_labels = ['Green'] * len(star_wars_movies)


movie_titles = star_wars_movies + marvel_movies
genre_labels = star_wars_labels + marvel_labels

movie_indices = [movies_index[recommender.get_movie_id_by_name(str(x))] for x in movie_titles]

_, V_k = recommender.recommender_svd(R_filled, k=2)
x, y = V_k[0, movie_indices], V_k[1, movie_indices]
fig = plt.figure()
plt.scatter(x, y, c=genre_labels)
for i, movie_name in enumerate(movie_titles):
    plt.annotate(movie_name, (x[i], y[i]))
    
if not STUDENT_VERSION:
    fig.text(0.5, 0.5, EO_TEXT, transform=fig.transFigure,
        fontsize=EO_SIZE/2, color=EO_COLOR, alpha=EO_ALPHA*0.5, fontname=EO_FONT,
        ha='center', va='center', rotation=EO_ROT)
In [ ]: